A new Hierarchical Bayes approach to ensemble-variational data assimilation

1 A new Hierarchical Bayes approach to ensemble-variational data assimilation
Michael Tsyrulnikov and Alexander Rakitko (HydroMetCenter of Russia)
College Park, 20 Oct 2014

2 Outline
1 Introduction and motivation
2 Methodological problems in the existing data assimilation approaches we intend to alleviate with the new technique
3 Hierarchical Bayes: principle
4 Hierarchical Bayes EnVar
5 HB-EnVar: analysis algorithms
6 HB-EnVar: first performance results

3 Introduction and motivation

4 Sequential assimilation in a nutshell: setup
A discrete-time observed dynamical system.
To be recovered/estimated: the evolution of the hidden truth $x_1, x_2, x_3, \dots$, which obeys $x_{k+1} = F_k(x_k) + \varepsilon_k$.
We are given:
1 the forecast model $x^f_{k+1} = F_k(x^a_k)$,
2 observations $y$,
3 the observation operator: $y = H_k(x) + \eta_k$.
The goal in filtering: compute $p(x_k \mid y_{1:k})$.

5 Sequential assimilation in a nutshell: cycling
Two-step cycling:
1 Time update (forecast): from $p(x_{k-1} \mid y_{1:k-1})$ to $p(x_k \mid y_{1:k-1})$.
2 Observation update (analysis): from $p(x_k \mid y_{1:k-1})$ to $p(x_k \mid y_{1:k})$.
In the Gaussian case (with a linear forecast model), it suffices to update the mean $m$ and the covariance matrix $\Gamma$.

6 Sequential assimilation in a nutshell: Kalman filter
Linear and Gaussian model: $x_{k+1} = F_k x_k + \varepsilon_k$, $y = H_k x + \eta_k$.
The Kalman filter equations:
Primary filter, forecast: $x^f_k = F_{k-1} x^a_{k-1}$
Primary filter, analysis: $x^a_k = x^f_k + K (y - H_k x^f_k)$, where $K = B H^\top (H B H^\top + R)^{-1}$ and $B \equiv \Gamma^f_k$
Secondary filter, forecast: $\Gamma^f_k \equiv B_k = F_{k-1} \Gamma^a_{k-1} F_{k-1}^\top + Q_k$
Secondary filter, analysis: $\Gamma^a_k = (I - K H) B$
Two problems with the KF: (1) it is too expensive (hence the EnKF); (2) there is no feedback from the primary filter to the secondary filter.
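To make the two filters concrete, here is a minimal numpy sketch of one KF cycle following the equations above; the model matrices and dimensions are purely illustrative.

```python
# One Kalman-filter cycle (forecast + analysis); all values illustrative.
import numpy as np

def kf_cycle(x_a, Gamma_a, F, Q, H, R, y):
    # Primary and secondary filters, forecast step.
    x_f = F @ x_a
    B = F @ Gamma_a @ F.T + Q              # Gamma_f, denoted B in the analysis
    # Analysis step: gain, state update, covariance update.
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_a_new = x_f + K @ (y - H @ x_f)
    Gamma_a_new = (np.eye(len(x_f)) - K @ H) @ B
    return x_a_new, Gamma_a_new

# Illustrative 2-D state, 1-D observation.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
x, G = np.zeros(2), np.eye(2)
x, G = kf_cycle(x, G, F, Q, H, R, y=np.array([1.0]))
```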

7 Sequential assimilation in a nutshell: EnKF
In the Kalman gain $K = B H^\top (H B H^\top + R)^{-1}$, $B$ is not computed but estimated from the background ensemble $X^e = \{x^e_1 - \bar{x}^e, \dots, x^e_N - \bar{x}^e\}$:
$B = \frac{1}{N-1} X^e (X^e)^\top$
Consequences:
- the analysis increment turns out to belong to the ensemble space (spanned by the columns of $X^e$, i.e. the background ensemble perturbations);
- only $N - 1$ observations can be fitted;
- ensemble covariances are noisy, hence the need for localization (e.g. dividing the domain into sub-domains, covariance tapering $B \circ L$, etc.), which is not optimized.
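A minimal sketch (not from the slides) of the ensemble estimate of $B$ with an optional Schur-product taper $B \circ L$; the Gaussian taper over grid distance is an illustrative choice of localization function.

```python
import numpy as np

def ensemble_B(ens, taper_len=None):
    """ens: (n, N) array, ensemble members as columns."""
    n, N = ens.shape
    Xe = ens - ens.mean(axis=1, keepdims=True)   # perturbations x^e_k - xbar^e
    B = Xe @ Xe.T / (N - 1)                      # noisy, rank <= N - 1
    if taper_len is not None:
        d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        L = np.exp(-0.5 * (d / taper_len) ** 2)  # tapering matrix
        B = B * L                                # Schur product B o L
    return B
```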

8 Sequential assimilation in a nutshell: Var
In the gain matrix $K = B H^\top (H B H^\top + R)^{-1}$, $B$ is neither computed nor estimated on-line; it is estimated off-line from an archive.
Advantage: a static model $B_0$ for $B$ is observable and normally has full rank.
Disadvantage: no dependence on the atmospheric flow.

9 Sequential assimilation in a nutshell: EnVar
Since both the static and the flow-dependent $B$ are imperfect, let us combine them:
$B := w B_0 + (1 - w) B_{\mathrm{EnKF}}$
and then, again, apply the same analysis equation:
$x^a = x^b + K (y - H x^b)$, where $K = B H^\top (H B H^\top + R)^{-1}$.
A weakness of EnVar: taking a linear combination of static and ensemble covariances to specify $B$ is simplistic and, most likely, not optimal.
A problem common to all the above approaches: the analysis is optimal provided that $B$ is precisely known. But it isn't.
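A one-function sketch of the hybrid gain; the weight $w$ and all matrices are illustrative, and the linear combination is exactly the simplistic hybridization criticized above.

```python
import numpy as np

def hybrid_gain(B0, B_ens, H, R, w=0.5):
    B = w * B0 + (1 - w) * B_ens           # simplistic linear hybridization
    return B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
```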

10 Summary of motivation: methodological problems in the existing data assimilation approaches we intend to alleviate with the new technique
1 All existing Var, EnKF, and EnVar analysis equations assume that the effective background-error covariance matrix $B$ is exact. This is never the case.
2 EnVar takes a linear combination of static and ensemble covariances to specify $B$. This is ad hoc.
3 EnKF and EnVar use an ad-hoc localization, which is not theoretically optimal.
4 In the Var, EnKF, and EnVar analysis equations there is no intrinsic feedback from observations to the background-error statistics; this requires external adaptation or manual tuning.

11 Hierarchical Bayes: principle

12 Statistical basics: Frequentist, Bayes, Hierarchical Bayes
The parameter estimation problem.
1 Frequentist (non-Bayesian). Setup: $p(y; \theta)$, where $\theta$ is deterministic. Estimation: e.g. maximum likelihood, $\hat{\theta} = \arg\max_\theta p(y; \theta)$.
2 Bayes. Setup: $p(y \mid \theta)$, where $\theta$ is random. Specify the prior $p(\theta)$ and look at the posterior: $p(\theta \mid y) \propto p(\theta)\, p(y \mid \theta)$. Estimation: e.g. maximum posterior probability, $\hat{\theta} = \arg\max_\theta p(\theta \mid y)$.
3 Hierarchical Bayes. Setup: $p(\theta) = p(\theta \mid \beta)$, where $\beta$ is random. Specify the hyper-prior $p(\beta)$ and look at the joint posterior: $p(\theta, \beta \mid y) \propto p(\beta)\, p(\theta \mid \beta)\, p(y \mid \theta)$. Estimation: e.g. $(\hat{\theta}, \hat{\beta}) = \arg\max_{\theta, \beta} p(\theta, \beta \mid y)$.
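A scalar numerical illustration (my own, not from the talk) of the three paradigms for estimating a Gaussian mean: maximum likelihood, a conjugate Gaussian prior with fixed hyper-parameters, and a joint posterior mode over the parameter and a random prior variance with an exponential hyper-prior; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, sig_obs = 1.0, 0.5
y = theta_true + sig_obs * rng.standard_normal(20)    # observations

# 1. Frequentist ML: theta deterministic, maximize p(y; theta).
theta_ml = y.mean()

# 2. Bayes: Gaussian prior theta ~ N(m0, b0) with fixed hyper-parameters.
m0, b0 = 0.0, 1.0
prec_post = 1.0 / b0 + len(y) / sig_obs**2
theta_bayes = (m0 / b0 + y.sum() / sig_obs**2) / prec_post

# 3. Hierarchical Bayes: the prior variance b is itself random; maximize
#    the joint posterior p(theta, b | y) on a grid (hyper-prior: b ~ Exp(1)).
thetas = np.linspace(-1, 3, 401)
bs = np.linspace(0.05, 5, 400)
T, Bgrid = np.meshgrid(thetas, bs)
log_post = (-Bgrid                                              # log hyper-prior
            - 0.5 * np.log(Bgrid) - 0.5 * (T - m0)**2 / Bgrid   # log prior
            - 0.5 * ((y[:, None, None] - T)**2).sum(0) / sig_obs**2)  # log lik
i, j = np.unravel_index(log_post.argmax(), log_post.shape)
theta_hb, b_hb = T[i, j], Bgrid[i, j]
print(theta_ml, theta_bayes, theta_hb, b_hb)
```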

13 DA: Frequentist and Bayes
The analysis step: the parameter is $\theta \equiv x$; the observational likelihood is $p(y \mid x)$.
1 Bayes. Setup: the state (parameter) is random. The background determines its prior, $p(x \mid x^b)$. Other parameters of the prior ($m$, $B$) are specified. The posterior:
$p(x \mid x^b, y) \propto p(x \mid x^b)\, p(y \mid x) \propto e^{-\frac{1}{2}[(x - x^b)^\top B^{-1} (x - x^b) + (y - H(x))^\top R^{-1} (y - H(x))]}$
2 Frequentist (non-Bayesian). Setup: the background $x^b$ is regarded as part of the observations, $z := (x^b, y)$, while $x$ is deterministic.

14 DA: Hierarchical Bayes
For Gaussian distributions, the hyper-parameter is $\beta \equiv (m, B)$.
Why hyper-priors in DA now? Because, on the one hand, the hyper-parameter $B$ remains largely uncertain, and, on the other hand, it has become observable with the advent of ensemble techniques.
Prior: $p(x) = p(x \mid m, B)$
Hyper-prior: $p(m, B)$
Joint posterior: $p(x, m, B \mid x^b, y)$

15 Hierarchical Bayes analysis: principle
We stop specifying the hyper-parameters $m = x^b$ and $B$ of the prior distribution of $x$.
We admit that these hyper-parameters, the (true) mean vector $m$ and the (true) covariance matrix $B$, are uncertain and random.
We specify priors for the hyper-parameters $m$ and $B$.
In the analysis, we estimate $m$ and $B$ from all the available information, along with the state $x$.

16 Hierarchical Bayes EnVar

17 Hierarchical Bayes EnVar (HB-EnVar): principle
In this talk, we focus on the analysis step.
We update $m$ and $B$ along with the state $x$, given the deterministic background $x^f$, the background ensemble $X^e$, and the observations $y$.
The joint posterior is
$p(x, m, B \mid x^f, X^e, y) \propto p(m \mid x^f)\, p(B)\, p(x \mid m, B)\, p(X^e \mid m, B)\, p(y \mid x)$
The primary goal is the posterior distribution of $x$, and its mean $\hat{x}$ in particular.

18 The traditional terms
The prior state conditional distribution:
$p(x \mid m, B) \propto \frac{1}{|B|^{1/2}}\, e^{-\frac{1}{2} (x - m)^\top B^{-1} (x - m)}$
The observational likelihood:
$p(y \mid x) \propto e^{-\frac{1}{2} (y - H x)^\top R^{-1} (y - H x)}$

19 Ensemble likelihood
$p(X^e \mid m, B) \propto \frac{1}{|B|^{N/2}}\, e^{-\frac{1}{2} \sum_{k=1}^{N} (x^e_k - m)^\top B^{-1} (x^e_k - m)}$
There is no need, and no room, for approximations here.
NB: ensemble members are treated as observations on $B$.
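A minimal sketch of evaluating $-2 \log p(X^e \mid m, B)$ (up to a constant) directly from the formula above; the test values are illustrative.

```python
import numpy as np

def neg2_log_ens_lik(Xe, m, B):
    """Xe: (n, N) ensemble members as columns; m: (n,); B: (n, n) SPD."""
    n, N = Xe.shape
    dev = Xe - m[:, None]                        # deviations from the mean m
    # sum_k (x_k - m)^T B^{-1} (x_k - m) = tr(B^{-1} dev dev^T)
    quad = np.trace(np.linalg.solve(B, dev @ dev.T))
    logdetB = np.linalg.slogdet(B)[1]
    return N * logdetB + quad

rng = np.random.default_rng(1)
B_true = np.array([[1.0, 0.6], [0.6, 1.0]])
Xe = np.linalg.cholesky(B_true) @ rng.standard_normal((2, 10))
print(neg2_log_ens_lik(Xe, np.zeros(2), B_true))
```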

20 Prior pdf for B
So, the only inputs to the new optimal extended-space analysis technique are the prior distributions $p(m \mid x^f)$ and $p(B)$.
This is good news, because these can, in principle, be retrieved from an archive of (adequate) ensembles.

21 Matrix variate probability distributions
Vectorization. For a general matrix $X$ with columns $x_1, \dots, x_n$: $\vec{X} := \mathrm{vec}\, X = (x_1^\top, \dots, x_n^\top)^\top$.
For a sparse matrix: include in $\vec{X}$ only the non-zero entries (i.e. those in the matrix support).
$p(X)$ is identified with $p(\mathrm{vec}\, X)$.

22 Matrix variate Gaussian distribution
$X$ is matrix-variate Gaussian distributed if $\vec{X} = \mathrm{vec}\, X$ is a multivariate Gaussian vector with mean $\mathrm{vec}\, M$ and covariance matrix $U \otimes V$, where $U$ and $V$ are non-random symmetric non-negative definite matrices and
$U \otimes V := \begin{pmatrix} U_{11} V & \cdots & U_{1n} V \\ \vdots & & \vdots \\ U_{n1} V & \cdots & U_{nn} V \end{pmatrix}$
The resulting pdf is
$p(X) \propto e^{-\frac{1}{2} \mathrm{tr}[(X - M) U^{-1} (X - M)^\top V^{-1}]}$
Covariances between matrix entries: $\mathrm{Cov}(X_{i,j}, X_{i',j'}) = V_{i,i'}\, U_{j,j'}$
Simulation: $X = M + \Phi Y \Psi^\top$, where $\Phi$ and $\Psi$ are such that $V = \Phi \Phi^\top$ and $U = \Psi \Psi^\top$, and $Y$ is a pure-noise (iid standard Gaussian) random matrix.
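A minimal sketch of the simulation recipe $X = M + \Phi Y \Psi^\top$, taking $\Phi$ and $\Psi$ as Cholesky factors of $V$ and $U$; the particular $U$ chosen here is illustrative.

```python
import numpy as np

def sample_matrix_gaussian(M, V, U, rng):
    Phi = np.linalg.cholesky(V)            # V = Phi Phi^T (row covariance)
    Psi = np.linalg.cholesky(U)            # U = Psi Psi^T (column covariance)
    Y = rng.standard_normal(M.shape)       # iid N(0,1) "pure noise" matrix
    return M + Phi @ Y @ Psi.T

rng = np.random.default_rng(2)
n = 4
M = np.zeros((n, n))
V = np.eye(n)
U = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1)-like
X = sample_matrix_gaussian(M, V, U, rng)
# Check against the slide: Cov(X_ij, X_i'j') = V_ii' U_jj';
# e.g. Var(X_ij) = V_ii * U_jj = 1 here.
```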

23 Selecting the prior pdf for B: requirements
1 The distribution family should be suitable for modeling random covariance matrices.
2 The family should be rich enough to give rise to realistically complex, case-to-case variable background-error covariances.
3 For efficient Monte-Carlo sampling, the distribution should have an analytically tractable and quickly computable pdf.

24 The candidate probability distributions for B
- Discrete distribution
- Truncated Gaussian
- Wishart / Inverse Wishart
- Square-root Gaussian
- (A parametric covariance model with random parameters)

25 Discrete distribution
$p(B) = \sum_{m=1}^{M} w_m\, \delta(B - B_m)$
Too restrictive, especially if the $B_m$ are from a climatic archive.
Can be used in the cycling mode.
Leads to a kind of particle filter at the covariance level.

26 Wishart and Inverse Wishart distributions
Conjugate priors for Gaussian likelihoods. The Wishart distribution for the precision matrix $C$:
$p(C) \propto |C|^{\theta/2} \exp\{-\frac{1}{2} \mathrm{tr}(\theta B_0 C)\}$
Not flexible enough: only one matrix-variate parameter ($B_0$) for both the location and the scale of the distribution; the second, scale (dispersion) parameter is only scalar-valued ($\theta$).

27 Square-root Gaussian distribution (our current choice)
In the factorization $B = W W^\top$, postulate that $W$ is a Gaussian random matrix. Its pdf is
$p(W) \propto e^{-\frac{1}{2} \mathrm{tr}[(W - W_0) U^{-1} (W - W_0)^\top U^{-1}]}$
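A minimal sketch of drawing a random covariance matrix from this prior: perturb $W_0$ with matrix-Gaussian noise (using the simulation recipe of the matrix-variate Gaussian slide, with $\Phi = \Psi$ since both factors here involve $U$) and set $B = W W^\top$; the parameter values are illustrative.

```python
import numpy as np

def sample_B_sqrt_gaussian(W0, U, rng):
    L = np.linalg.cholesky(U)              # U = L L^T
    Y = rng.standard_normal(W0.shape)      # iid N(0,1) noise matrix
    W = W0 + L @ Y @ L.T                   # matrix-Gaussian perturbation of W_0
    return W @ W.T                         # symmetric non-negative definite B

rng = np.random.default_rng(3)
n = 6
W0 = np.linalg.cholesky(np.eye(n))         # e.g. a square root of a static B_0
U = 0.1 * np.eye(n)                        # small prior spread around W_0
B = sample_B_sqrt_gaussian(W0, U, rng)
```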

28-34 A random sample from p(B): a row of the random matrix
[Seven figure slides showing rows of random matrices sampled from $p(B)$; images not reproduced in this transcription.]

35-38 A random sample from p(B)
[Four figure slides showing full random samples from $p(B)$; images not reproduced.]

39 HB-EnVar: analysis algorithms

40 The joint posterior pdf
$p_{\mathrm{post}}(x, W) \propto p(W)\, p(x \mid m, W)\, p(X^e \mid m, B)\, p(y \mid x)$
$J(x, W) := -2 \log p_{\mathrm{post}}(x, W) = J_W + J_b + J_e + J_o$
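A minimal sketch of evaluating $J(x, W)$ as the sum of the four terms, with $J_b$, $J_e$, and $J_o$ taken from the pdfs on the earlier slides and $J_W$ from the square-root Gaussian prior; the argument names are illustrative.

```python
import numpy as np

def J(x, W, m, W0, Uinv, Xe, H, R, y):
    """-2 log p_post(x, W) up to a constant."""
    B = W @ W.T
    Binv = np.linalg.inv(B)
    logdetB = np.linalg.slogdet(B)[1]
    N = Xe.shape[1]
    dW = W - W0
    J_W = np.trace(dW @ Uinv @ dW.T @ Uinv)           # -2 log p(W), sqrt-Gaussian
    db = x - m
    J_b = logdetB + db @ Binv @ db                    # -2 log p(x | m, B)
    dev = Xe - m[:, None]
    J_e = N * logdetB + np.trace(Binv @ dev @ dev.T)  # -2 log p(X^e | m, B)
    do = y - H @ x
    J_o = do @ np.linalg.solve(R, do)                 # -2 log p(y | x)
    return J_W + J_b + J_e + J_o
```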

41 (1) Deterministic analysis: posterior-mode analysis, analytic solution
If we impose the Wishart prior on the precision matrix $C$, then there is an analytic solution for $B$:
$\hat{B} = \frac{1}{\theta + N + 1} (\theta B_0 + N S + A)$,
where $S$ is the sample covariance matrix and $A := (\hat{x} - m)(\hat{x} - m)^\top$ is the adaptivity matrix.
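A direct numeric transcription of this formula; I assume here that $S$ is the sample covariance about $m$ normalized by $N$ (consistent with the $N S$ term above), which is an assumption on my part.

```python
import numpy as np

def B_hat(B0, theta, Xe, m, x_hat):
    N = Xe.shape[1]
    dev = Xe - m[:, None]
    S = dev @ dev.T / N                    # sample covariance about m (assumed)
    d = x_hat - m
    A = np.outer(d, d)                     # adaptivity matrix
    return (theta * B0 + N * S + A) / (theta + N + 1)
```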

42 Deterministic analysis: posterior-mode analysis, numerical solution (1-D)
$\arg\max_{x, W}\, p_{\mathrm{post}}(x, W) = \arg\min_{x, W}\, J(x, W)$

43 Posterior-mode analysis: numerical solution
Numerical optimization (quasi-Newton).
1-D: a unique mode. 4 and 8 grid points: 3 maxima.
Important: with the sqrt-Gaussian prior distribution of $B$, the global mode gives a localized $W$ (without any kind of imposed localization!), in contrast to a local mode. [Figure comparing the global mode (left) with a local mode (right) not reproduced.]

44 Analysis: (2) Importance sampling
The posterior pdf in marginalized form:
$p_{\mathrm{post}}(x, W) = p_{\mathrm{post}}(W)\, p_{\mathrm{post}}(x \mid W)$
$p_{\mathrm{post}}(x \mid W) \sim \mathcal{N}(x^a(m, W),\, B^a(W))$

45 Importance sampling
$\hat{x} = \int p_{\mathrm{post}}(W)\, x^a(m, W)\, dW$
$\hat{x} = \mathbb{E}_{p_{\mathrm{post}}(W)}\, x^a(m, W) = \mathbb{E}_{q(W)} \left[ \frac{p_{\mathrm{post}}(W)}{q(W)}\, x^a(m, W) \right]$
$\hat{x} \approx \bar{x}^a := \sum_{m=1}^{M} w_m\, x^a(m, W^+_m)$ (see the sketch after this slide).
- Selection of $q$: a Gaussian pdf centered at the EnVar $W^{\mathrm{EV}}$.
- Localization is achieved by introducing sparsity within the proposal distribution $q$ for $W$.
- The ordinary analysis step is included in the importance-sampling analysis.
- EnVar can be reproduced within HB-EnVar by nullifying the prior uncertainty in $B$.
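A minimal sketch of the self-normalized importance-sampling estimator above; the callables (the proposal sampler, the two log-densities, and the conditional analysis $x^a$) are hypothetical placeholders for the quantities defined on the previous slides.

```python
import numpy as np

def importance_sampling_xhat(sample_q, log_q, log_p_post, x_a, M, rng):
    """sample_q(rng) draws W from q; x_a(W) returns the conditional analysis."""
    Ws = [sample_q(rng) for _ in range(M)]
    logw = np.array([log_p_post(W) - log_q(W) for W in Ws])
    w = np.exp(logw - logw.max())          # stabilized importance weights
    w /= w.sum()                           # self-normalization
    return sum(wm * x_a(W) for wm, W in zip(w, Ws))
```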

46 HB-EnVar: first performance results

47 1-D illustrative example: dependence on the ensemble size
[Figure: analysis RMSE vs. ensemble size N for Exact B, HB Mode, HB Importance, EnVar, Var, and EnKF; image not reproduced.]
In the toy problem, the deterministic HB-EnVar analyses outperform Var, EnKF, and EnVar.

48 Previous work
Wikle C.K. and Berliner L.M. (2007). A Bayesian tutorial for data assimilation. Physica D, 230, 1-16: proposed to use the hierarchical Bayesian paradigm to account for uncertainties in the parameters of the error statistics used in data assimilation.
Myrseth I. and Omre H. (2010). Hierarchical Ensemble Kalman Filter. SPE Journal, 15(2): proposed, within the EnKF paradigm, to remove the assumption that the background-error covariance matrix B and the background-error mean field m are known deterministic quantities, replacing it with the assumption that they are uncertain and random. But they did not use the ensemble likelihood.
Bocquet M. (2011). Ensemble Kalman filtering without the intrinsic need for inflation. Nonlin. Processes Geophys., 18: went further, separating, in the EnKF, the distribution of the random matrix B (and the random vector m) into the prior and the ensemble likelihood. But he used non-informative priors, whereas we propose informative priors.

49 Conclusions
Main aspects of HB-EnVar:
- The background-error covariance matrix B is treated as a sparse random matrix and updated in the optimal scheme along with the state.
- The key issue is the prior distribution of B.
- Ensemble members are treated as observations on the B matrix and assimilated along with ordinary observations.
- The technique is computationally expensive.
Potential benefits of HB-EnVar:
- Optimized hybridization of static and ensemble covariances.
- Optimized combination of $x^f$ and $X^e$.
- Optimized covariance localization.
- Optimized feedback from y to the B matrix.
- Uncertainty in B is explicitly accounted for in the generation of the analysis ensemble, resulting in increased spread.
Thank you!
