The Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance


1 The Kalman Filter: Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance, School of Mathematical and Physical Sciences, University of Reading. July.

2 Outline
- Introduction
- Derivation of the Kalman Filter
- Kalman Filter properties
- Filter divergence
- Conclusions
- References

3 State estimation feedback loop [figure]

4 The Model
Consider the discrete, linear system
$$x_{k+1} = M_k x_k + w_k, \qquad k = 0, 1, 2, \ldots, \qquad (1)$$
where
- $x_k \in \mathbb{R}^n$ is the state vector at time $t_k$;
- $M_k \in \mathbb{R}^{n \times n}$ is the state transition matrix (mapping from time $t_k$ to $t_{k+1}$), or model;
- $\{w_k \in \mathbb{R}^n;\ k = 0, 1, 2, \ldots\}$ is a white, Gaussian sequence with $w_k \sim N(0, Q_k)$, often referred to as model error;
- $Q_k \in \mathbb{R}^{n \times n}$ is a symmetric positive definite covariance matrix (known as the model error covariance matrix).

10 The Observations
We also have discrete, linear observations that satisfy
$$y_k = H_k x_k + v_k, \qquad k = 1, 2, 3, \ldots, \qquad (2)$$
where
- $y_k \in \mathbb{R}^p$ is the vector of actual measurements or observations at time $t_k$;
- $H_k \in \mathbb{R}^{p \times n}$ is the observation operator. Note that this is not in general a square matrix;
- $\{v_k \in \mathbb{R}^p;\ k = 1, 2, \ldots\}$ is a white, Gaussian sequence with $v_k \sim N(0, R_k)$, often referred to as observation error;
- $R_k \in \mathbb{R}^{p \times p}$ is a symmetric positive definite covariance matrix (known as the observation error covariance matrix).
We assume that the initial state $x_0$ and the noise vectors at each step, $\{w_k\}$, $\{v_k\}$, are mutually independent.
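To make the setup concrete, here is a minimal NumPy sketch that simulates the state-space model (1) and the observations (2). The particular matrices, dimensions, and random seed are illustrative assumptions of mine, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 2, 1                        # state and observation dimensions (assumed)
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])         # example constant-velocity model (assumed)
H = np.array([[1.0, 0.0]])         # observe the first state component only
Q = 0.01 * np.eye(n)               # model error covariance
R = np.array([[0.25]])             # observation error covariance
P0 = np.eye(n)                     # initial state covariance

x = rng.multivariate_normal(np.zeros(n), P0)              # draw x_0 ~ N(0, P_0)
xs, ys = [], []
for k in range(50):
    x = M @ x + rng.multivariate_normal(np.zeros(n), Q)   # equation (1)
    y = H @ x + rng.multivariate_normal(np.zeros(p), R)   # equation (2)
    xs.append(x)
    ys.append(y)
```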

16 The Prediction and Filtering Problems
We suppose that there is some uncertainty in the initial state, i.e.,
$$x_0 \sim N(0, P_0), \qquad (3)$$
with $P_0 \in \mathbb{R}^{n \times n}$ a symmetric positive definite covariance matrix.
The problem is now to compute an improved estimate of the stochastic variable $x_k$, given that $y_1, \ldots, y_j$ have been measured:
$$x_{k|j} = E[x_k \mid y_1, \ldots, y_j]. \qquad (4)$$
- When $j = k$ this is called the filtered estimate.
- When $j = k - 1$ this is the one-step predicted, or (here) the predicted estimate.

20 Section: Derivation of the Kalman Filter

21 The Kalman filter (Kalman, 1960) provides estimates for the linear discrete prediction and filtering problem.
- We will take a minimum variance approach to deriving the filter.
- We assume that all the relevant probability densities are Gaussian, so that we can simply consider the mean and covariance.
- Rigorous justification and other approaches to deriving the filter are discussed by Jazwinski (1970), Chapter 7.

25 Prediction step
We first derive the equation for one-step prediction of the mean using the state propagation model (1):
$$x_{k+1|k} = E[x_{k+1} \mid y_1, \ldots, y_k] = E[M_k x_k + w_k \mid y_1, \ldots, y_k] = M_k x_{k|k}, \qquad (5)$$
where the last equality holds because $w_k$ is zero-mean and independent of $y_1, \ldots, y_k$.

28 The one-step prediction of the covariance is defined by
$$P_{k+1|k} = E\left[(x_{k+1} - x_{k+1|k})(x_{k+1} - x_{k+1|k})^T \mid y_1, \ldots, y_k\right]. \qquad (6)$$
Exercise: Using the state propagation model (1) and the one-step prediction of the mean (5), show that
$$P_{k+1|k} = M_k P_{k|k} M_k^T + Q_k. \qquad (7)$$
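The prediction equations translate directly into NumPy. In the sketch below, the function name and signature are my own choices, not from the lecture.

```python
import numpy as np

def predict(x_filt, P_filt, M, Q):
    """One-step Kalman prediction, equations (5) and (7).

    x_filt, P_filt: filtered mean x_{k|k} and covariance P_{k|k}.
    Returns the predicted mean x_{k+1|k} and covariance P_{k+1|k}.
    """
    x_pred = M @ x_filt                 # equation (5)
    P_pred = M @ P_filt @ M.T + Q       # equation (7)
    return x_pred, P_pred
```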

30 Filtering Step
At the time of an observation, we assume that the update to the mean may be written as a linear combination of the observation and the previous estimate:
$$x_{k|k} = x_{k|k-1} + K_k (y_k - H_k x_{k|k-1}), \qquad (8)$$
where $K_k \in \mathbb{R}^{n \times p}$ is known as the Kalman gain and will be derived shortly.

32 But first we consider the covariance associated with this estimate:
$$P_{k|k} = E\left[(x_k - x_{k|k})(x_k - x_{k|k})^T \mid y_1, \ldots, y_k\right]. \qquad (9)$$
Using the observation update for the mean (8) we have
$$x_k - x_{k|k} = x_k - x_{k|k-1} - K_k (y_k - H_k x_{k|k-1})$$
$$= x_k - x_{k|k-1} - K_k (H_k x_k + v_k - H_k x_{k|k-1}), \quad \text{replacing the observations with their model equivalent (2),}$$
$$= (I - K_k H_k)(x_k - x_{k|k-1}) - K_k v_k. \qquad (10)$$
Thus, since the error in the prior estimate, $x_k - x_{k|k-1}$, is uncorrelated with the measurement noise, we find
$$P_{k|k} = (I - K_k H_k)\, E\left[(x_k - x_{k|k-1})(x_k - x_{k|k-1})^T\right] (I - K_k H_k)^T + K_k\, E\left[v_k v_k^T\right] K_k^T. \qquad (11)$$

38 Remark
Using our established notation for the prior and observation error covariances, we obtain the formula
$$P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T. \qquad (12)$$
This is sometimes known as the Joseph form for the covariance update. It is valid for any value of $K_k$. If we choose the optimal Kalman gain, it can be simplified further (see below).
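The Joseph form is easy to check numerically. The sketch below (the matrices are arbitrary illustrations of mine) applies equation (12) with a deliberately suboptimal gain and verifies that the result is still symmetric, as a covariance matrix must be.

```python
import numpy as np

def joseph_update(P_prior, K, H, R):
    """Covariance update in Joseph form, equation (12); valid for any gain K."""
    n = P_prior.shape[0]
    A = np.eye(n) - K @ H
    return A @ P_prior @ A.T + K @ R @ K.T

# Illustrative (assumed) matrices, not from the lecture.
P_prior = np.array([[2.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
K = np.array([[0.3], [0.1]])             # deliberately not the optimal gain

P_post = joseph_update(P_prior, K, H, R)
assert np.allclose(P_post, P_post.T)     # Joseph form preserves symmetry
```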

39 To provide a minimum variance estimate we must minimize $\mathrm{trace}(P_{k|k})$ over all possible values of the gain matrix $K_k$.
Exercise: Show that
$$\frac{\partial\, \mathrm{trace}(P_{k|k})}{\partial K_k} = -2 (H_k P_{k|k-1})^T + 2 K_k (H_k P_{k|k-1} H_k^T + R_k). \qquad (13)$$
Setting this expression to zero and solving for $K_k$ yields the Kalman gain
$$K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}. \qquad (14)$$

41 Simplification of the a posteriori error covariance formula
Using this value of the Kalman gain we are in a position to simplify the Joseph form as
$$P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T = (I - K_k H_k) P_{k|k-1}. \qquad (15)$$
Exercise: Show this.
Note that the covariance update equation is independent of the actual measurements, so $P_{k|k}$ could be computed in advance.
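To make the simplification concrete, the following sketch (again with illustrative matrices of my own choosing) computes the optimal gain (14) via a linear solve rather than an explicit matrix inverse, and checks numerically that the Joseph form (12) and the short form (15) then agree.

```python
import numpy as np

def kalman_gain(P_prior, H, R):
    """Optimal Kalman gain, equation (14), via a linear solve.

    Solves (H P H^T + R) K^T = H P for K^T, avoiding an explicit inverse.
    """
    S = H @ P_prior @ H.T + R                # innovation covariance
    return np.linalg.solve(S, H @ P_prior).T

# Illustrative (assumed) matrices.
P_prior = np.array([[2.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

K = kalman_gain(P_prior, H, R)
I_KH = np.eye(2) - K @ H
joseph = I_KH @ P_prior @ I_KH.T + K @ R @ K.T   # equation (12)
short = I_KH @ P_prior                           # equation (15)
assert np.allclose(joseph, short)                # equal only for the optimal gain
```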

42 Summary of the Kalman filter
Prediction step
- Mean update: $x_{k+1|k} = M_k x_{k|k}$
- Covariance update: $P_{k+1|k} = M_k P_{k|k} M_k^T + Q_k$
Observation update step
- Mean update: $x_{k|k} = x_{k|k-1} + K_k (y_k - H_k x_{k|k-1})$
- Kalman gain: $K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}$
- Covariance update: $P_{k|k} = (I - K_k H_k) P_{k|k-1}$
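Putting the summary together, here is a compact, self-contained sketch of the full filter running on a simulated system. All concrete values (model, covariances, horizon) are illustrative assumptions, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (assumed) time-invariant system.
M = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

x_true = np.zeros(2)
x_est, P = np.zeros(2), np.eye(2)        # x_{0|0} and P_{0|0}

for k in range(50):
    # Truth and observation, equations (1) and (2).
    x_true = M @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)

    # Prediction step.
    x_est = M @ x_est                    # mean
    P = M @ P @ M.T + Q                  # covariance

    # Observation update step.
    S = H @ P @ H.T + R                  # innovation covariance
    K = np.linalg.solve(S, H @ P).T      # Kalman gain, equation (14)
    x_est = x_est + K @ (y - H @ x_est)  # equation (8)
    P = (np.eye(2) - K @ H) @ P          # equation (15)

print("final estimation error:", x_true - x_est)
```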

43 Scalar Example
Exercise: Suppose we have a scalar, time-invariant, perfect-model system such that $M = 1$, $w = 0$, $Q = 0$, $H = 1$, $R = r$. By combining the prediction and filtering steps, show that the following recurrence relations, written in terms of prior quantities, hold:
$$x_{k+1|k} = (1 - K_k) x_{k|k-1} + K_k y_k, \qquad (16)$$
$$K_k = \frac{p_{k|k-1}}{p_{k|k-1} + r}, \qquad (17)$$
$$p_{k+1|k} = \frac{p_{k|k-1}\, r}{p_{k|k-1} + r}. \qquad (18)$$

44 If we divide the recurrence for $p_{k+1|k}$, (18), by $r$ on each side, and write $\rho_k = p_{k+1|k}/r$, we have
$$\rho_k = \frac{\rho_{k-1}}{1 + \rho_{k-1}}. \qquad (19)$$
Solving this nonlinear recurrence we find
$$\rho_k = \frac{\rho_0}{1 + k \rho_0}. \qquad (20)$$
So $\rho_k \to 0$ as $k \to \infty$, i.e., over time the error in the state estimate becomes vanishingly small.
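A quick numerical check of the recurrence (19) against its closed-form solution (20), with an arbitrary (assumed) starting value for $\rho_0$:

```python
import numpy as np

rho0 = 2.0                      # arbitrary (assumed) initial value
rho = rho0
for k in range(1, 21):
    rho = rho / (1.0 + rho)                  # recurrence (19)
    closed_form = rho0 / (1.0 + k * rho0)    # solution (20)
    assert np.isclose(rho, closed_form)
print("rho_20 =", rho)          # small: the error variance shrinks like 1/k
```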

47 Section: Kalman Filter properties

48 Minimum variance and MMSE
For our derivation we assumed:
- a linear model (state propagation) and observation operator;
- Gaussian statistics for the uncertainty in the initial state and observations.
We found the Kalman filter as a minimum variance estimate under these assumptions. Since the posterior estimation error is $x_k - x_{k|k}$, $\mathrm{trace}(P_{k|k})$ also represents the expected mean square error. Hence our derivation shows the Kalman filter is a minimum mean square error (MMSE) estimate.

49 BLUE
If we stick with a linear model, but allow for non-Gaussian statistics, we find that the Kalman filter provides the Best Linear Unbiased Estimate (BLUE), or linear minimum mean square error (LMMSE) estimate.

50 Least squares - MAP estimate
Consider the least squares problem:
$$J_k(x_0) = \frac{1}{2}(x_0 - \bar{x}_0)^T P_0^{-1} (x_0 - \bar{x}_0) + \frac{1}{2}\sum_{l=1}^{k} (y_l - H_l x_l)^T R_l^{-1} (y_l - H_l x_l) + \frac{1}{2}\sum_{l=0}^{k-1} w_l^T Q_l^{-1} w_l, \qquad (21)$$
subject to the constraint
$$x_{l+1} = M_l x_l + w_l, \qquad l = 0, 1, \ldots, k-1. \qquad (22)$$
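As an illustration, a direct evaluation of the cost function (21) along a trajectory generated by the constraint (22) might look like the sketch below. The function name, argument layout, and indexing conventions are my own assumptions, not from the lecture.

```python
import numpy as np

def weak_constraint_cost(x0, ws, x0_bar, P0, Ms, Hs, Rs, Qs, ys):
    """Evaluate the weak-constraint cost (21) for an initial state x0 and a
    model-error sequence ws, propagating states via the constraint (22).

    Indexing assumption: ws[l] = w_l for l = 0..k-1, and ys[l], Hs[l], Rs[l]
    correspond to the observation at time l+1.
    """
    d0 = x0 - x0_bar
    J = 0.5 * d0 @ np.linalg.solve(P0, d0)                 # background term
    x = x0
    for l, w in enumerate(ws):
        x = Ms[l] @ x + w                                  # constraint (22)
        innov = ys[l] - Hs[l] @ x                          # y_{l+1} - H_{l+1} x_{l+1}
        J += 0.5 * innov @ np.linalg.solve(Rs[l], innov)   # observation term
        J += 0.5 * w @ np.linalg.solve(Qs[l], w)           # model-error term
    return J
```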

51 Remarks
- This is the same as the weak-constraint 4D-Var cost function.
- The minimizing solution to the least squares problem, $x_0^a$, is the maximizer of the conditional probability $p(x_0, x_1, \ldots, x_k \mid y_1, \ldots, y_k)$. This is known as the MAP (maximum a posteriori) estimate.
- The solution of the least squares problem, evolved to the end of the time window, i.e., $x_k^a$, is exactly $x_{k|k}$ in the corresponding Kalman filter problem.
- In view of the constraints, we could alternatively minimize the cost function with respect to $x_k$. This leads to other forms of Kalman filter, known as the recursive least squares (RLS) and information filters. This is covered by Simon (2006).

55 Filter stability
We have seen that the Kalman filter is an optimal filter. However, optimality does not imply stability.
- The Kalman filter is a stable filter in exact arithmetic.
- Stability in exact arithmetic does not imply numerical stability!

56 Theorem (see Jazwinski (1970))
If the dynamical system (1), (2) is uniformly completely observable and uniformly completely controllable, then the Kalman filter is uniformly asymptotically stable.
Remarks
- Observability measures whether there is enough observation information. It takes into account the propagation of information with the model. (A simple rank check for the time-invariant case is sketched below.)
- Controllability measures whether it is possible to nudge the system to the correct solution by applying appropriate increments.
- Uniform asymptotic stability implies that regardless of the initial data $x_0$, $P_0$, the errors in the Kalman filter solution will decay exponentially fast with time. Even with an unstable model $M$, the Kalman filter will stabilize the system!
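For a time-invariant system, a common first diagnostic is to check the rank of the observability matrix $[H;\ HM;\ \ldots;\ HM^{n-1}]$. This is my own illustration, and it is a weaker notion than the uniform complete observability the theorem requires; the matrices are the assumed example used earlier.

```python
import numpy as np

def observability_matrix(M, H):
    """Stack H, HM, ..., HM^{n-1} for a time-invariant system."""
    n = M.shape[0]
    blocks, HMk = [], H
    for _ in range(n):
        blocks.append(HMk)
        HMk = HMk @ M
    return np.vstack(blocks)

M = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])               # observe position only
O = observability_matrix(M, H)
print("observable:", np.linalg.matrix_rank(O) == M.shape[0])   # True here
```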

57 Section: Filter divergence

58 Despite the nice stability properties of the filter in exact arithmetic, in practice the Kalman filter does suffer from filter divergence.
- Filter divergence is often made manifest through overconfidence in the filter prediction ($P_{k|k}$ too small), with subsequent observations having little effect on the estimate.
- Filter divergence can be caused by inaccurate descriptions of the model (and model error) dynamics, biased observations, etc., as well as by numerical roundoff errors.

59 Compensating for numerical and other errors
- Naive implementations of the Kalman filter can blow up (see, for example, Verhaegen and van Dooren (1986) for error analysis).
- This can be improved by implementing a square root or UDU form of the filter. (This is discussed by Bierman (1977); more recent books provide short summaries, e.g. Simon (2006), Chapter 6; Grewal and Andrews (2008), Chapter 6.)
- Other errors may be compensated for by increasing the model error, or equivalently by using a fading memory version of the filter. This has the effect of reducing the filter's confidence in the model, and hence recent observations are given more weight. (A minimal sketch follows.)
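As a minimal sketch of the fading-memory idea, following the standard construction described in, e.g., Simon (2006): the predicted covariance is inflated by a factor $\alpha^2 \ge 1$, which discounts old information relative to new observations. The function name and the value of alpha are my own assumptions.

```python
import numpy as np

def fading_memory_predict(x_filt, P_filt, M, Q, alpha=1.01):
    """Prediction step with fading memory: the propagated covariance is
    inflated by alpha**2 >= 1, reducing confidence in the model so that
    recent observations receive more weight in the update."""
    x_pred = M @ x_filt
    P_pred = (alpha ** 2) * (M @ P_filt @ M.T) + Q
    return x_pred, P_pred
```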

62 Section: Conclusions

63 Conclusions
In this lecture we have:
- Derived the Kalman filter equations
- Seen the optimality of the filter as a minimum variance estimate
- Briefly discussed the stability of the filter in exact arithmetic
- Pointed out the dangers of naive implementation of the Kalman filter

64 Some important aspects that we didn't cover:
- Kalman smoothing
- Nonlinear extensions of the Kalman filter, such as the Extended Kalman filter (but see Janjic lecture!)

65 References
G.J. Bierman. Factorization Methods for Discrete Sequential Estimation. Academic Press, New York, 1977.
M.S. Grewal and A.P. Andrews. Kalman Filtering: Theory and Practice Using MATLAB. Wiley, New Jersey, 2008.
A.H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, New York, 1970.
R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35-45, 1960.
D. Simon. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley, New Jersey, 2006.
M. Verhaegen and P. van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Trans. Auto. Control, AC-31(10), 1986.
