Convergence of the Ensemble Kalman Filter in Hilbert Space
1 Convergence of the Ensemble Kalman Filter in Hilbert Space
Jan Mandel, Center for Computational Mathematics, Department of Mathematical and Statistical Sciences, University of Colorado Denver.
Parts based on joint work with Loren Cobb and Jonathan Beezley. Supported by NSF grant EGS and Fondation STAE project ADTAO.
Toulouse, June 14, 2012
2 Data assimilation - update cycle
Goal: inject data into a running model. The cycle alternates: forecast, then analysis (update with data), then advance in time, then forecast, then analysis, ... with new data entering at each analysis.
The Kalman filter was used already in Apollo 11 navigation, and is now in GPS.
3 Bayesian approach
Discrete-time filtering = sequential statistical estimation. The model state is a random variable U with probability density p(u). Data + measurement error enter as the data likelihood p(d|u). Probability densities are updated by the Bayes theorem:
p(u) ∝ p(d|u) p^f(u), where ∝ means proportional.
Bayesian update cycle: time k−1 analysis U^(k−1) ~ p^(k−1)(u), advance model, time k forecast U^(k),f ~ p^(k),f(u), Bayesian update p^(k)(u) ∝ p(d|u) p^(k),f(u) with data likelihood p(d|u), time k analysis U^(k) ~ p^(k)(u).
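The update p(u) ∝ p(d|u) p^f(u) can be checked numerically on a grid. A minimal Python sketch (the 1-D setup and all names are illustrative, not from the slides):

```python
import numpy as np

def bayes_update(u, prior, d, H, R):
    """Bayesian update on a 1-D grid: posterior proportional to p(d|u) * prior(u)."""
    likelihood = np.exp(-0.5 * (H * u - d) ** 2 / R)  # Gaussian data likelihood
    post = likelihood * prior
    du = u[1] - u[0]
    return post / (post.sum() * du)                   # normalize to a density

u = np.linspace(-10.0, 10.0, 2001)
prior = np.exp(-0.5 * (u - 1.0) ** 2)                 # forecast N(1, 1), unnormalized
post = bayes_update(u, prior, d=3.0, H=1.0, R=1.0)
du = u[1] - u[0]
print((u * post).sum() * du)   # posterior mean: (1 + 3)/2 = 2 for equal variances
```

With equal forecast and data variances, the posterior mean lands halfway between the forecast mean and the datum, which is exactly what the Kalman formulas on the next slide give.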
4 Kalman filter
The model state U is represented by its mean and covariance. Assume that the forecast is Gaussian, U^f ~ N(m^f, Q^f), and the data likelihood is also Gaussian, p(d|u) ∝ exp(−(Hu − d)^T R^{−1} (Hu − d)/2). Then the analysis is also Gaussian, U ~ N(m, Q), given by
m = m^f + K(d − H m^f), Q = (I − KH) Q^f,
where K = Q^f H^T (H Q^f H^T + R)^{−1} is the gain matrix.
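The analysis formulas translate directly to matrix code. A short sketch (the 2-D example is illustrative, not from the slides):

```python
import numpy as np

def kalman_update(m_f, Q_f, d, H, R):
    """Kalman analysis step: m = m_f + K(d - H m_f), Q = (I - K H) Q_f."""
    S = H @ Q_f @ H.T + R                  # innovation covariance
    K = Q_f @ H.T @ np.linalg.inv(S)       # gain matrix
    m = m_f + K @ (d - H @ m_f)
    Q = (np.eye(len(m_f)) - K @ H) @ Q_f
    return m, Q

# 2-D state, observe only the first component
m_f = np.array([1.0, 0.0])
Q_f = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
m, Q = kalman_update(m_f, Q_f, np.array([3.0]), H, R)
print(m, Q[0, 0])   # mean [2, 0]; analysis variance 0.5 in the observed direction
```

Note that the unobserved component keeps its forecast mean and variance, since it is uncorrelated with the observed one here.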
5 Kalman filter
Hard to advance the covariance matrix when the model is nonlinear. Modifications of the code are needed: tangential and adjoint models. Needs to maintain the covariance matrix - can't do for large problems. But the dimension of the state space is large - discretizations of PDEs.
6 Ensemble Kalman filter (EnKF)
Represent the model state by an ensemble, and replace the forecast covariance by the sample covariance computed from the forecast ensemble,
Q_N = (1/(N−1)) Σ_{i=1}^N (X_i^f − X̄^f)(X_i^f − X̄^f)^T, X̄^f = (1/N) Σ_{i=1}^N X_i^f.
First idea (Evensen 1994): apply the Kalman formula to the ensemble members:
X_i = X_i^f + K_N (d − H X_i^f), K_N = Q_N H^T (H Q_N H^T + R)^{−1}.
But the analysis covariance was wrong (too small) even if Q_N = Q^f were exact. Fix: perturb the data by sampling from the data error distribution: replace d by D_i = d + E_i, with E_i i.i.d. ~ N(0, R) (Burgers, Evensen, van Leeuwen, 1998).
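One perturbed-observation analysis step can be sketched in a few lines of Python (function name and example setup are illustrative, not from the slides):

```python
import numpy as np

def enkf_analysis(X_f, d, H, R, rng):
    """Perturbed-observation EnKF analysis step; ensemble members are columns of X_f."""
    n, N = X_f.shape
    A = X_f - X_f.mean(axis=1, keepdims=True)
    Q_N = (A @ A.T) / (N - 1)                            # sample forecast covariance
    K = Q_N @ H.T @ np.linalg.inv(H @ Q_N @ H.T + R)     # sample gain
    E = rng.multivariate_normal(np.zeros(H.shape[0]), R, size=N).T
    D = d[:, None] + E                                   # perturbed data D_i = d + E_i
    return X_f + K @ (D - H @ X_f)

rng = np.random.default_rng(0)
N = 5000
X_f = rng.multivariate_normal([1.0, 0.0], np.eye(2), size=N).T   # Gaussian forecast sample
H = np.array([[1.0, 0.0]]); R = np.array([[1.0]])
X_a = enkf_analysis(X_f, np.array([3.0]), H, R, rng)
print(X_a.mean(axis=1))   # close to the Kalman analysis mean [2, 0]
```

For a large ensemble the analysis mean and spread approach the exact Kalman analysis, which is the convergence statement made precise in the rest of the talk.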
7 The EnKF with data perturbation is correct if the covariance is exact
Lemma. Let U^f ~ N(m^f, Q^f), D = d + E, E ~ N(0, R), and
U = U^f + K(D − H U^f), K = Q^f H^T (H Q^f H^T + R)^{−1}.
Then U ~ N(m, Q) (i.e., U has the correct analysis distribution from the Bayes theorem and the Kalman filter).
Proof. Computation in Burgers, Evensen, van Leeuwen (1998).
Corollary. If the forecast ensemble U_i^f is a sample from the forecast distribution, and the exact covariance is used, then the analysis ensemble U_i is a sample from the analysis distribution.
8 EnKF properties
The model is needed only as a black box. The EnKF was derived for Gaussian distributions, but the algorithm does not depend on this, so it is often used for nonlinear models and non-Gaussian distributions anyway.
The EnKF was designed so that the ensemble is a sample from the correct analysis distribution if
- the forecast ensemble is i.i.d. and Gaussian
- the covariance matrix is exact
But
- the sample covariance matrix is a random element
- the ensemble is not independent: the sample covariance ties it together
- the ensemble is not Gaussian: the update is a nonlinear mapping
9 Convergence of the EnKF in the large ensemble limit
Law of large numbers arguments guarantee that the EnKF gives correct results for large ensembles in the Gaussian case: Le Gland et al. (2011), Mandel et al. (2011), by similar arguments. Here we simplify the approach from Mandel et al. (2011) and show that it carries over to an infinite-dimensional Hilbert space. The infinite-dimensional case is important as the limit case of a high-dimensional state space.
10 Convergence analysis overview
For a large ensemble
- the sample covariance converges to the exact covariance
- the ensemble is asymptotically i.i.d.
- the ensemble is asymptotically Gaussian
- the ensemble members converge to the correct distribution
How?
- the ensemble is exchangeable: plays the role of independent
- the ensemble is a-priori bounded in L^p: plays the role of Gaussian
- use the weak law of large numbers for the sample covariance
- the ensemble converges to a Gaussian i.i.d. ensemble obtained with the exact covariance matrix instead of the sample covariance
Note: so far, we are still in a fixed, finite dimension. We'll see later what survives in the infinite-dimensional case.
11 Tools
Notation: |X| is the standard Euclidean norm on R^n. If X : Ω → R^n is a random element, ||X||_p = E(|X|^p)^{1/p} is its stochastic L^p norm, 1 ≤ p < ∞.
X_j → X in probability if for every ε > 0: P(|X_j − X| ≥ ε) → 0.
Continuous mapping theorem: if X_j → X in probability and g is continuous, then g(X_j) → g(X) in probability.
L^p convergence implies convergence in probability.
12 Tools
Uniform integrability: if X_j → X in probability and {X_j} is bounded in L^p, then X_j → X in L^q, 1 ≤ q < p.
Weak law of large numbers: if X_i ∈ L^2 are i.i.d., then (1/N) Σ_{i=1}^N X_i → E(X_1) in L^2 as N → ∞.
A set of random elements {X_1, ..., X_N} is exchangeable if their joint distribution is invariant to a permutation of their indices. A set that is i.i.d. is exchangeable. Conversely, exchangeable random elements are identically distributed, but not necessarily independent.
13 Convergence of the EnKF to the Kalman filter
Generate the initial ensemble X^{(0)} = [X_1^{(0)}, ..., X_N^{(0)}] i.i.d. and Gaussian. Run the EnKF, get ensembles [X_1^{(k)}, ..., X_N^{(k)}], and advance in time by linear models: X_i^{(k+1),f} = M_k X_i^{(k)} + f_k.
For theoretical purposes, define reference ensembles [U_1^{(k)}, ..., U_N^{(k)}] by U^{(0)} = X^{(0)} and U^{(k)} by the EnKF, except with the exact covariance of the filtering distribution instead of the sample covariance. We already know that U^{(k)} is a sample (i.i.d.) from the Gaussian filtering distribution (as it would be delivered by the Kalman filter).
We will show by induction that X_i^{(k)} → U_i^{(k)} in L^p for all 1 ≤ p < ∞ as N → ∞, for all analysis cycles k. In which sense? We need this for all i = 1, ..., N, but the U_i change and N changes too.
14 Exchangeability of EnKF ensembles
Lemma. All ensembles [X_1^{(k)}, ..., X_N^{(k)}] generated by the EnKF are exchangeable.
Proof: The initial ensemble [X_1^{(0)}, ..., X_N^{(0)}] = [U_1^{(0)}, ..., U_N^{(0)}] is i.i.d., thus exchangeable, and the analysis step is permutation invariant.
Corollary. The random elements X_1^{(k)} − U_1^{(k)}, ..., X_N^{(k)} − U_N^{(k)} are also exchangeable. (Recall that the U_i are obtained using the exact covariance from the Kalman filter rather than the sample covariance.)
Corollary. [X_1^{(k)} − U_1^{(k)}, ..., X_N^{(k)} − U_N^{(k)}] are identically distributed, so we only need to consider the convergence ||X_1^{(k)} − U_1^{(k)}||_p → 0.
15 Estimates for the covariance
Write the N-th sample covariance of [X_1, X_2, ...] as
C_N(X) = (1/N) Σ_{i=1}^N X_i X_i^T − ((1/N) Σ_{i=1}^N X_i)((1/N) Σ_{i=1}^N X_i)^T.
Bound: if the X_i ∈ L^{2p} are identically distributed, then
||C_N(X)||_p ≤ ||X_1 X_1^T||_p + ||X_1||_p^2 ≤ ||X_1||_{2p}^2 + ||X_1||_p^2.
Continuity of the sample covariance: if the [X_i, U_i] ∈ L^4 are identically distributed, then
||C_N(X) − C_N(U)||_2 ≤ 8 ||X_1 − U_1||_4 max(||X_1||_4, ||U_1||_4).
16 Estimates for the covariance
L^2 law of large numbers for the sample covariance: if the U_i ∈ L^4, i = 1, 2, ... are i.i.d., then ||C_N(U) − Cov(U_1)||_2 → 0 as N → ∞.
Proof. The U_i U_i^T ∈ L^2 are i.i.d. and the weak law of large numbers applies.
Corollary. If the [X_i, U_i] ∈ L^4 are identically distributed, the U_i are independent, and ||X_1 − U_1||_4 → 0 as N → ∞, then C_N(X) → Cov(U_1) in L^2, thus also C_N(X) → Cov(U_1) in probability.
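The law of large numbers for the sample covariance is easy to observe empirically. A quick Monte Carlo sketch (the 2-D setup is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
true_cov = np.array([[2.0, 0.5], [0.5, 1.0]])

def cov_error(N):
    """Spectral-norm error of the sample covariance C_N for one sample of size N."""
    X = rng.multivariate_normal(np.zeros(2), true_cov, size=N)
    return np.linalg.norm(np.cov(X, rowvar=False) - true_cov, 2)

errs = {N: np.mean([cov_error(N) for _ in range(50)]) for N in (100, 10000)}
print(errs)   # the mean error shrinks roughly like 1/sqrt(N)
```

Averaging over 50 repetitions approximates the stochastic L^1 norm of the error, which decreases with the sample size as the corollary predicts.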
17 A-priori L^p bounds on EnKF ensembles
Lemma. All EnKF ensembles [X_1^{(k)}, ..., X_N^{(k)}] are bounded in L^p, ||X_1^{(k)}||_p ≤ C_{k,p}, with constants independent of the ensemble size N, for all 1 ≤ p < ∞.
Proof: By induction over the step index k. Recall that
X^{(k)} = X^{(k),f} + K_N^{(k)} (D^{(k)} − H^{(k)} X^{(k),f}),
K_N^{(k)} = Q_N^{(k),f} H^{(k)T} (H^{(k)} Q_N^{(k),f} H^{(k)T} + R^{(k)})^{−1}.
Bound the sample covariance ||Q_N^{(k),f}||_p from the L^{2p} bound on the ensemble. Bound the matrix norm ||(H^{(k)} Q_N^{(k),f} H^{(k)T} + R^{(k)})^{−1}|| ≤ const from R^{(k)} > 0 (positive definite).
18 Convergence of the EnKF is now simple
Recall: X_i^{(k)} = EnKF ensemble, U_i^{(k)} = reference i.i.d. ensemble from exact covariances. By induction over the step index k:
X_1^{(k),f} → U_1^{(k),f} in L^p as N → ∞, 1 ≤ p < ∞
=> sample covariance Q_N^{(k),f} = C_N(X^{(k),f}) → Cov(U_1^{(k),f}) in probability
=> (continuous mapping theorem) Kalman gain K_N^{(k)} → K^{(k)} in probability, and X_1^{(k)} → U_1^{(k)} in probability
Linearly advancing in time: X_1^{(k+1),f} → U_1^{(k+1),f} in probability.
A-priori L^p bound and uniform integrability => X_1^{(k+1),f} → U_1^{(k+1),f} in L^p, 1 ≤ p < ∞.
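The convergence can be seen numerically in the scalar linear-Gaussian case: as N grows, the EnKF analysis mean approaches the exact Kalman analysis mean. A Monte Carlo sketch (the scalar model and all names are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
m_f, q_f, h, r, d = 1.0, 1.0, 1.0, 1.0, 3.0          # scalar forecast and data model
m_exact = m_f + q_f * h / (h * q_f * h + r) * (d - h * m_f)   # Kalman analysis mean = 2

def enkf_mean(N):
    """Analysis ensemble mean after one perturbed-observation EnKF update."""
    X = rng.normal(m_f, np.sqrt(q_f), N)              # forecast ensemble
    q = X.var(ddof=1)                                 # sample covariance
    K = q * h / (h * q * h + r)                       # sample gain
    D = d + rng.normal(0.0, np.sqrt(r), N)            # perturbed data
    return (X + K * (D - h * X)).mean()

errs = {N: np.mean([abs(enkf_mean(N) - m_exact) for _ in range(100)])
        for N in (10, 10000)}
print(errs)   # the error decreases as the ensemble grows
```

This is one analysis cycle only; the induction above propagates the same convergence through repeated forecast-analysis cycles with linear models.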
19 Towards high-dimension asymptotics
We have proved convergence in the large sample limit for arbitrary but fixed state space dimension. This says nothing about the speed of convergence as the state dimension increases.
Curse of dimensionality: slow convergence in high dimension. Why and when does it happen? With arbitrary probability distributions, surely. But probability distributions in practice are not arbitrary: the state is a discretization of a random function. Random functions are random elements with values in infinite-dimensional spaces. The state dimension comes just from the mesh size. The smoother the functions, the faster the EnKF convergence.
As a step towards convergence uniform in the state space dimension, look at the infinite-dimensional case first - just like in numerical analysis, where we study the PDE first and only then its numerical approximation.
20 Bayes theorem in an infinite-dimensional Hilbert space
The model state is a probability measure µ on a separable Hilbert space V with inner product <·,·>. Avoid densities: the Bayes theorem p(u) ∝ p(d|u) p^f(u) becomes
µ(A) = ∫_A p(d|u) dµ^f(u) / ∫_V p(d|u) dµ^f(u), A µ^f-measurable.
Need ∫_V p(d|u) dµ^f(u) > 0 for the Bayes theorem to work. Data likelihood p(d|u) > 0 for all u from some open set in V is enough. Gaussian data likelihood:
p(d|u) = exp(−(1/2) |R^{−1/2}(Hu − d)|^2) if Hu − d ∈ Im(R^{1/2}), 0 otherwise.
In particular, R = I works just fine. But R cannot be the covariance of a probability measure if the data space is infinite-dimensional.
21 Probability on an infinite-dimensional Hilbert space
The mean of a random element X is the weak integral: E(X) ∈ V, <v, E(X)> = E(<v, X>) for all v ∈ V.
The tensor product of vectors x, y ∈ V was x y^T; now it is the bounded linear operator x ⊗ y ∈ [V], (x ⊗ y) v = x <y, v>, v ∈ V.
The covariance of a random element X becomes
Cov(X) = E((X − E(X)) ⊗ (X − E(X))) = E(X ⊗ X) − E(X) ⊗ E(X).
The covariance of a random element on V must be a compact operator of trace class: its eigenvalues λ_n must satisfy Σ_{n=1}^∞ λ_n < ∞.
22 Curse of dimensionality? Not for probability measures!
[Figure: filter error vs. model size for SIR and EnKF, N = 10 ensemble members, m = 25 uniformly sampled data points from a 1D state. From Beezley (2009), Fig. 4.7.]
Constant covariance eigenvalues λ_n = 1 and the inverse law λ_n = 1/n do not give probability measures in the limit because Σ_{n=1}^∞ λ_n = ∞. The inverse square law λ_n = 1/n^2 gives a probability measure because Σ_{n=1}^∞ λ_n < ∞.
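A Gaussian measure with eigenvalues λ_n = 1/n^2 can be sampled by a truncated Karhunen-Loève expansion. The sketch below uses an illustrative sine basis on L^2(0,1); the basis choice and all names are assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
n_modes = 100
lam = 1.0 / np.arange(1, n_modes + 1) ** 2                 # trace class: sum(lam) < inf
phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, n_modes + 1), x))

def sample():
    """One draw X = sum_n sqrt(lam_n) xi_n phi_n with xi_n i.i.d. N(0,1)."""
    xi = rng.standard_normal(n_modes)
    return (np.sqrt(lam)[:, None] * xi[:, None] * phi).sum(axis=0)

# E||X||^2 equals the trace sum(lam), roughly pi^2/6; check by Monte Carlo
dx = x[1] - x[0]
vals = [np.sum(sample() ** 2) * dx for _ in range(2000)]
print(np.mean(vals))   # close to pi**2/6, about 1.64
```

With λ_n = 1 or λ_n = 1/n, the same expectation diverges as more modes are added, which is the non-summable case ruled out above.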
23 The L^2 and the weak laws of large numbers
The definition of L^p(Ω, V) = {u : E(||u||^p) < ∞} and ||u||_p carry over. The L^2 law of large numbers for the sample mean E_N = (1/N) Σ_{i=1}^N X_i carries over: let X_i ∈ L^2(Ω, V) be i.i.d.; then
||E_N − E(X_1)||_2 ≤ N^{−1/2} ||X_1 − E(X_1)||_2 ≤ 2 N^{−1/2} ||X_1||_2,
and consequently E_N → E(X_1) in probability.
But L^p laws of large numbers do not generally hold on Banach spaces (in fact they define Rademacher type p spaces), and the space [V] of all bounded linear operators on a Hilbert space is not a Hilbert space but a quite general Banach space.
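The N^{−1/2} rate in the bound above is easy to observe with a finite-dimensional stand-in for V. A quick Monte Carlo sketch (dimension and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 50                                   # stand-in for a discretized function space

def rms_err(N, reps=200):
    """RMS norm of the sample-mean error over `reps` independent experiments."""
    errs = [np.linalg.norm(rng.standard_normal((N, dim)).mean(axis=0))
            for _ in range(reps)]
    return np.sqrt(np.mean(np.square(errs)))

e1, e2 = rms_err(100), rms_err(400)
print(e1 / e2)   # quadrupling N halves the RMS error: ratio near 2
```

The ratio near 2 is exactly the N^{−1/2} scaling: the error at N = 400 is half the error at N = 100.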
24 Sample covariance
The definition carries over from finite dimension:
C_N([X_1, ..., X_N]) = (1/N) Σ_{i=1}^N X_i ⊗ X_i − ((1/N) Σ_{i=1}^N X_i) ⊗ ((1/N) Σ_{i=1}^N X_i).
But we do not have the L^2(Ω, [V]) law of large numbers for the sample covariance. Instead use L^2(Ω, V_HS), with V_HS the space of all Hilbert-Schmidt operators on V and the norm ||A||_HS^2 = Σ_{n=1}^∞ <A e_n, A e_n>, where {e_n} is any complete orthonormal sequence in V. Now the L^2 law of large numbers applies in the Hilbert space V_HS, and ||A||_[V] ≤ ||A||_HS, so we recover C_N([X_1, ..., X_N]) → Cov(X_1) in L^2(Ω, [V]).
25 Convergence of the EnKF in a Hilbert space
A-priori L^p bounds on EnKF ensembles: we need only
||(H^{(k)} Q_N^{(k),f} H^{(k)*} + R^{(k)})^{−1}||_[V] ≤ const,
then the bound carries over. This is true when the data error covariance has spectrum bounded below, R^{(k)} ≥ cI > 0, such as R^{(k)} = I. This was good for the Bayes theorem already.
With these few changes the whole proof carries over, avoiding entry-by-entry arguments. But not all is good: the formulation of the EnKF algorithm now involves data perturbation by white noise, and white noise is not a random element with values in the Hilbert space, at least not in the usual sense.
26 EnKF in a Hilbert space with white noise data perturbation
A weak random element (Balakrishnan 1976) has a finitely additive, not necessarily σ-additive, distribution, which allows identity covariance (white noise). If the covariance of a Gaussian weak random element is of trace class, the distribution is in fact σ-additive and the weak random element is a random element in the usual sense.
So let D = d + E, with E a weak random element ~ N(0, R), U^f ~ N(m^f, Q^f), and
U = U^f + K(D − H U^f), with K = Q^f H^* (H Q^f H^* + R)^{−1}
=> the analysis U is also a weak random element with the analysis distribution N(m, Q), and Q = (I − KH) Q^f is trace class
=> the distribution of the analysis U is a Gaussian probability measure.
27 Convergence of the EnKF is now simple
Recall: X_i^{(k)} = EnKF ensemble, U_i^{(k)} = reference i.i.d. ensemble from exact covariances. By induction over the step index k:
X_1^{(k),f} → U_1^{(k),f} in L^p as N → ∞, 1 ≤ p < ∞
=> sample covariance Q_N^{(k),f} = C_N(X^{(k),f}) → Cov(U_1^{(k),f}) in probability
=> (continuous mapping theorem) Kalman gain K_N^{(k)} → K^{(k)} in probability, and X_1^{(k)} → U_1^{(k)} in probability
Linearly advancing in time: X_1^{(k+1),f} → U_1^{(k+1),f} in probability.
A-priori L^p bound and uniform integrability => X_1^{(k+1),f} → U_1^{(k+1),f} in L^p, 1 ≤ p < ∞.
28 Conclusion
With a bit of care, a simplified proof in finite dimension carries over to an infinite-dimensional separable Hilbert space. White noise data error is good; white noise state space is bad.
Computational experiments confirm that the EnKF converges uniformly for high-dimensional distributions that approximate a Gaussian measure on a Hilbert space (J. Beezley, Ph.D. thesis, 2009). The EnKF for distributions with slowly decaying eigenvalues converges very slowly and requires huge ensembles. The same is seen for particle filters... in spite of the fact that particle filters for problems without a structure fail in high dimension.
The square root EnKF has no data perturbation - but the ensemble may not be exchangeable.
Next: uniform convergence for high-dimensional problems that approximate an infinite-dimensional one... just like numerical PDEs?
29 References
[1] A. V. Balakrishnan. Applied Functional Analysis. Applications of Mathematics, No. 3. Springer-Verlag, New York, 1976.
[2] Jonathan D. Beezley. High-Dimensional Data Assimilation and Morphing Ensemble Kalman Filters with Applications in Wildfire Modeling. PhD thesis, University of Colorado Denver, 2009.
[3] Gerrit Burgers, Peter Jan van Leeuwen, and Geir Evensen. Analysis scheme in the ensemble Kalman filter. Monthly Weather Review, 126:1719-1724, 1998.
[4] Giuseppe Da Prato. An Introduction to Infinite-Dimensional Analysis. Springer-Verlag, Berlin, 2006.
[5] Geir Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research, 99(C5):10143-10162, 1994.
[6] François Le Gland, Valérie Monbet, and Vu-Duc Tran. Large sample asymptotics for the ensemble Kalman filter. In Dan Crisan and Boris Rozovskii, editors, The Oxford Handbook of Nonlinear Filtering. Oxford University Press, 2011. INRIA Report 7014, August 2009.
[7] Michel Ledoux and Michel Talagrand. Probability in Banach Spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete (3), Vol. 23. Springer-Verlag, Berlin, 1991.
[8] Jan Mandel, Loren Cobb, and Jonathan D. Beezley. On the convergence of the ensemble Kalman filter. Applications of Mathematics, 56:533-541, 2011. arXiv, January 2009.
Application of the Ensemble Kalman Filter to History Matching Presented at Texas A&M, November 16,2010 Outline Philosophy EnKF for Data Assimilation Field History Match Using EnKF with Covariance Localization
More informationAlternative Characterizations of Markov Processes
Chapter 10 Alternative Characterizations of Markov Processes This lecture introduces two ways of characterizing Markov processes other than through their transition probabilities. Section 10.1 describes
More informationShort tutorial on data assimilation
Mitglied der Helmholtz-Gemeinschaft Short tutorial on data assimilation 23 June 2015 Wolfgang Kurtz & Harrie-Jan Hendricks Franssen Institute of Bio- and Geosciences IBG-3 (Agrosphere), Forschungszentrum
More informationLocal Ensemble Transform Kalman Filter
Local Ensemble Transform Kalman Filter Brian Hunt 11 June 2013 Review of Notation Forecast model: a known function M on a vector space of model states. Truth: an unknown sequence {x n } of model states
More informationLarge deviations and averaging for systems of slow fast stochastic reaction diffusion equations.
Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Wenqing Hu. 1 (Joint work with Michael Salins 2, Konstantinos Spiliopoulos 3.) 1. Department of Mathematics
More informationData assimilation in the geosciences An overview
Data assimilation in the geosciences An overview Alberto Carrassi 1, Olivier Talagrand 2, Marc Bocquet 3 (1) NERSC, Bergen, Norway (2) LMD, École Normale Supérieure, IPSL, France (3) CEREA, joint lab École
More informationPREDICTOR-CORRECTOR AND MORPHING ENSEMBLE FILTERS FOR THE ASSIMILATION OF SPARSE DATA INTO HIGH-DIMENSIONAL NONLINEAR SYSTEMS
PREDICTOR-CORRECTOR AND MORPHING ENSEMBLE FILTERS FOR THE ASSIMILATION OF SPARSE DATA INTO HIGH-DIMENSIONAL NONLINEAR SYSTEMS Jan Mandel and Jonathan D. Beezley University of Colorado at Denver and Health
More informationA data-driven method for improving the correlation estimation in serial ensemble Kalman filter
A data-driven method for improving the correlation estimation in serial ensemble Kalman filter Michèle De La Chevrotière, 1 John Harlim 2 1 Department of Mathematics, Penn State University, 2 Department
More informationDynamic System Identification using HDMR-Bayesian Technique
Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in
More information9 Brownian Motion: Construction
9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of
More informationPractical Aspects of Ensemble-based Kalman Filters
Practical Aspects of Ensemble-based Kalman Filters Lars Nerger Alfred Wegener Institute for Polar and Marine Research Bremerhaven, Germany and Bremen Supercomputing Competence Center BremHLR Bremen, Germany
More informationLinear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters
Linear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 204 Emo Todorov (UW) AMATH/CSE 579, Winter
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationEnsemble Kalman filters, Sequential Importance Resampling and beyond
Ensemble Kalman filters, Sequential Importance Resampling and beyond Peter Jan van Leeuwen Institute for Marine and Atmospheric research Utrecht (IMAU) Utrecht University, P.O.Box 80005, 3508 TA Utrecht,
More informationOrganization. I MCMC discussion. I project talks. I Lecture.
Organization I MCMC discussion I project talks. I Lecture. Content I Uncertainty Propagation Overview I Forward-Backward with an Ensemble I Model Reduction (Intro) Uncertainty Propagation in Causal Systems
More informationMethods of Data Assimilation and Comparisons for Lagrangian Data
Methods of Data Assimilation and Comparisons for Lagrangian Data Chris Jones, Warwick and UNC-CH Kayo Ide, UCLA Andrew Stuart, Jochen Voss, Warwick Guillaume Vernieres, UNC-CH Amarjit Budiraja, UNC-CH
More informationFundamentals of Data Assimila1on
014 GSI Community Tutorial NCAR Foothills Campus, Boulder, CO July 14-16, 014 Fundamentals of Data Assimila1on Milija Zupanski Cooperative Institute for Research in the Atmosphere Colorado State University
More informationGaussian Process Approximations of Stochastic Differential Equations
Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run
More informationAlmost Invariant Half-Spaces of Operators on Banach Spaces
Integr. equ. oper. theory Online First c 2009 Birkhäuser Verlag Basel/Switzerland DOI 10.1007/s00020-009-1708-8 Integral Equations and Operator Theory Almost Invariant Half-Spaces of Operators on Banach
More informationTime Series Prediction by Kalman Smoother with Cross-Validated Noise Density
Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract
More informationLocal Ensemble Transform Kalman Filter: An Efficient Scheme for Assimilating Atmospheric Data
Local Ensemble Transform Kalman Filter: An Efficient Scheme for Assimilating Atmospheric Data John Harlim and Brian R. Hunt Department of Mathematics and Institute for Physical Science and Technology University
More informationAddressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF
Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF Javier Amezcua, Dr. Kayo Ide, Dr. Eugenia Kalnay 1 Outline
More informationLecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations
Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model
More informationSpectral Ensemble Kalman Filters
Spectral Ensemble Kalman Filters Jan Mandel 12, Ivan Kasanický 2, Martin Vejmelka 2, Kryštof Eben 2, Viktor Fugĺık 2, Marie Turčičová 2, Jaroslav Resler 2, and Pavel Juruš 2 1 University of Colorado Denver
More informationConstrained State Estimation Using the Unscented Kalman Filter
16th Mediterranean Conference on Control and Automation Congress Centre, Ajaccio, France June 25-27, 28 Constrained State Estimation Using the Unscented Kalman Filter Rambabu Kandepu, Lars Imsland and
More informationIntroduction to Ensemble Kalman Filters and the Data Assimilation Research Testbed
Introduction to Ensemble Kalman Filters and the Data Assimilation Research Testbed Jeffrey Anderson, Tim Hoar, Nancy Collins NCAR Institute for Math Applied to Geophysics pg 1 What is Data Assimilation?
More informationCIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions
CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions December 14, 2016 Questions Throughout the following questions we will assume that x t is the state vector at time t, z t is the
More informationState Estimation using Moving Horizon Estimation and Particle Filtering
State Estimation using Moving Horizon Estimation and Particle Filtering James B. Rawlings Department of Chemical and Biological Engineering UW Math Probability Seminar Spring 2009 Rawlings MHE & PF 1 /
More informationLecture 4: Extended Kalman filter and Statistically Linearized Filter
Lecture 4: Extended Kalman filter and Statistically Linearized Filter Department of Biomedical Engineering and Computational Science Aalto University February 17, 2011 Contents 1 Overview of EKF 2 Linear
More informationAccepted in Tellus A 2 October, *Correspondence
1 An Adaptive Covariance Inflation Error Correction Algorithm for Ensemble Filters Jeffrey L. Anderson * NCAR Data Assimilation Research Section P.O. Box 3000 Boulder, CO 80307-3000 USA Accepted in Tellus
More informationVector measures of bounded γ-variation and stochastic integrals
Vector measures of bounded γ-variation and stochastic integrals Jan van Neerven and Lutz Weis bstract. We introduce the class of vector measures of bounded γ-variation and study its relationship with vector-valued
More informationForecasting and data assimilation
Supported by the National Science Foundation DMS Forecasting and data assimilation Outline Numerical models Kalman Filter Ensembles Douglas Nychka, Thomas Bengtsson, Chris Snyder Geophysical Statistics
More informationParticle Filters. Outline
Particle Filters M. Sami Fadali Professor of EE University of Nevada Outline Monte Carlo integration. Particle filter. Importance sampling. Degeneracy Resampling Example. 1 2 Monte Carlo Integration Numerical
More information9 Multi-Model State Estimation
Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State
More information