The Kalman Filter
Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience
Sarah Dance
School of Mathematical and Physical Sciences, University of Reading
s.l.dance@reading.ac.uk
July 2013

Outline
- Introduction
- Derivation of the Kalman Filter
- Kalman Filter properties
- Filter divergence
- Conclusions
- References

State estimation feedback loop [figure]

The Model

Consider the discrete, linear system
$$x_{k+1} = M_k x_k + w_k, \quad k = 0, 1, 2, \ldots, \qquad (1)$$
where
- $x_k \in \mathbb{R}^n$ is the state vector at time $t_k$,
- $M_k \in \mathbb{R}^{n \times n}$ is the state transition matrix (mapping from time $t_k$ to $t_{k+1}$), or model,
- $\{w_k \in \mathbb{R}^n;\ k = 0, 1, 2, \ldots\}$ is a white, Gaussian sequence, with $w_k \sim N(0, Q_k)$, often referred to as model error,
- $Q_k \in \mathbb{R}^{n \times n}$ is a symmetric positive definite covariance matrix (known as the model error covariance matrix).

The Observations

We also have discrete, linear observations that satisfy
$$y_k = H_k x_k + v_k, \quad k = 1, 2, 3, \ldots, \qquad (2)$$
where
- $y_k \in \mathbb{R}^p$ is the vector of actual measurements or observations at time $t_k$,
- $H_k \in \mathbb{R}^{p \times n}$ is the observation operator. Note that this is not in general a square matrix (for example, when only some components of the state are observed, $p < n$),
- $\{v_k \in \mathbb{R}^p;\ k = 1, 2, \ldots\}$ is a white, Gaussian sequence, with $v_k \sim N(0, R_k)$, often referred to as observation error,
- $R_k \in \mathbb{R}^{p \times p}$ is a symmetric positive definite covariance matrix (known as the observation error covariance matrix).

We assume that the initial state, $x_0$, and the noise vectors at each step, $\{w_k\}$, $\{v_k\}$, are mutually independent.

The Prediction and Filtering Problems

We suppose that there is some uncertainty in the initial state, i.e.,
$$x_0 \sim N(0, P_0), \qquad (3)$$
with $P_0 \in \mathbb{R}^{n \times n}$ a symmetric positive definite covariance matrix.

The problem is now to compute an improved estimate of the stochastic variable $x_k$, provided $y_1, \ldots, y_j$ have been measured:
$$x_{k|j} = x_k \mid y_1, \ldots, y_j. \qquad (4)$$
- When $j = k$ this is called the filtered estimate.
- When $j = k - 1$ this is the one-step predicted, or (here) the predicted estimate.

Derivation of the Kalman Filter

The Kalman filter (Kalman, 1960) provides estimates for the linear discrete prediction and filtering problem.
- We will take a minimum variance approach to deriving the filter.
- We assume that all the relevant probability densities are Gaussian, so that we can simply consider the mean and covariance.
- Rigorous justification and other approaches to deriving the filter are discussed by Jazwinski (1970), Chapter 7.

Prediction step

We first derive the equation for one-step prediction of the mean using the state propagation model (1):
$$x_{k+1|k} = \mathrm{E}[x_{k+1} \mid y_1, \ldots, y_k] = \mathrm{E}[M_k x_k + w_k \mid y_1, \ldots, y_k] = M_k x_{k|k}, \qquad (5)$$
since $w_k$ has zero mean and is independent of the observations.

The one-step prediction of the covariance is defined by
$$P_{k+1|k} = \mathrm{E}\left[(x_{k+1} - x_{k+1|k})(x_{k+1} - x_{k+1|k})^T \mid y_1, \ldots, y_k\right]. \qquad (6)$$

Exercise: Using the state propagation model, (1), and the one-step prediction of the mean, (5), show that
$$P_{k+1|k} = M_k P_{k|k} M_k^T + Q_k. \qquad (7)$$

Filtering Step

At the time of an observation, we assume that the update to the mean may be written as a linear combination of the observation and the previous estimate:
$$x_{k|k} = x_{k|k-1} + K_k (y_k - H_k x_{k|k-1}), \qquad (8)$$
where $K_k \in \mathbb{R}^{n \times p}$ is known as the Kalman gain and will be derived shortly.

But first we consider the covariance associated with this estimate:
$$P_{k|k} = \mathrm{E}\left[(x_k - x_{k|k})(x_k - x_{k|k})^T \mid y_1, \ldots, y_k\right]. \qquad (9)$$

Using the observation update for the mean (8) we have
$$x_k - x_{k|k} = x_k - x_{k|k-1} - K_k (y_k - H_k x_{k|k-1}) = x_k - x_{k|k-1} - K_k (H_k x_k + v_k - H_k x_{k|k-1}),$$
replacing the observations with their model equivalent, so
$$x_k - x_{k|k} = (I - K_k H_k)(x_k - x_{k|k-1}) - K_k v_k. \qquad (10)$$

Thus, since the error in the prior estimate, $x_k - x_{k|k-1}$, is uncorrelated with the measurement noise, we find
$$P_{k|k} = (I - K_k H_k)\,\mathrm{E}\left[(x_k - x_{k|k-1})(x_k - x_{k|k-1})^T\right](I - K_k H_k)^T + K_k\, \mathrm{E}\left[v_k v_k^T\right] K_k^T. \qquad (11)$$

Remark

Using our established notation for the prior and observation error covariances, we obtain the formula
$$P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T. \qquad (12)$$
This is sometimes known as the Joseph form for the covariance update. It is valid for any value of $K_k$. If we choose the optimal Kalman gain, it can be simplified further (see below).

To provide a minimum variance estimate we must minimize $\mathrm{trace}(P_{k|k})$ over all possible values of the gain matrix $K_k$.

Exercise: Show that
$$\frac{\partial\, \mathrm{trace}(P_{k|k})}{\partial K_k} = -2 (H_k P_{k|k-1})^T + 2 K_k (H_k P_{k|k-1} H_k^T + R_k). \qquad (13)$$

Setting this expression to zero and solving for $K_k$ yields the Kalman gain
$$K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}. \qquad (14)$$
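A sketch of the exercise, using the standard matrix calculus identities $\partial\,\mathrm{trace}(KA)/\partial K = A^T$ and $\partial\,\mathrm{trace}(K B K^T)/\partial K = 2KB$ for symmetric $B$: expanding the Joseph form (12) and using the symmetry of $P_{k|k-1}$ and $R_k$,
$$\mathrm{trace}(P_{k|k}) = \mathrm{trace}(P_{k|k-1}) - 2\,\mathrm{trace}\bigl(K_k H_k P_{k|k-1}\bigr) + \mathrm{trace}\bigl(K_k (H_k P_{k|k-1} H_k^T + R_k) K_k^T\bigr),$$
and differentiating term by term with respect to $K_k$ gives (13).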

Simplification of the a posteriori error covariance formula

Using this value of the Kalman gain we are in a position to simplify the Joseph form as
$$P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T = (I - K_k H_k) P_{k|k-1}. \qquad (15)$$
Exercise: Show this.

Note that the covariance update equation is independent of the actual measurements, so $P_{k|k}$ could be computed in advance.
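A sketch of the exercise: expanding the Joseph form and grouping terms,
$$P_{k|k} = (I - K_k H_k) P_{k|k-1} - \bigl[P_{k|k-1} H_k^T - K_k (H_k P_{k|k-1} H_k^T + R_k)\bigr] K_k^T = (I - K_k H_k) P_{k|k-1},$$
since the bracketed term vanishes for the optimal gain (14).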

Summary of the Kalman filter

Prediction step
- Mean update: $x_{k+1|k} = M_k x_{k|k}$
- Covariance update: $P_{k+1|k} = M_k P_{k|k} M_k^T + Q_k$

Observation update step
- Mean update: $x_{k|k} = x_{k|k-1} + K_k (y_k - H_k x_{k|k-1})$
- Kalman gain: $K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}$
- Covariance update: $P_{k|k} = (I - K_k H_k) P_{k|k-1}$
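As a concrete illustration, here is a minimal NumPy sketch of one prediction/observation cycle implementing the equations above; the function and variable names (predict, update, x, P, M, Q, H, R, y) and the example values are chosen purely for illustration.

```python
import numpy as np

def predict(x, P, M, Q):
    """Prediction step: x_{k+1|k} = M x_{k|k},  P_{k+1|k} = M P_{k|k} M^T + Q."""
    return M @ x, M @ P @ M.T + Q

def update(x_pred, P_pred, y, H, R):
    """Observation update step using the Kalman gain K = P H^T (H P H^T + R)^{-1}."""
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = np.linalg.solve(S, H @ P_pred).T         # Kalman gain (uses symmetry of S and P_pred)
    x = x_pred + K @ (y - H @ x_pred)            # mean update
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred   # covariance update (15)
    return x, P

# Tiny illustrative example: scalar random walk, observed directly.
M = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.25]])
x, P = np.array([0.0]), np.array([[1.0]])
x, P = predict(x, P, M, Q)
x, P = update(x, P, np.array([0.3]), H, R)
print(x, P)
```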

Scalar Example

Exercise: Suppose we have a scalar, time-invariant, perfect model system such that $M = 1$, $w = 0$, $Q = 0$, $H = 1$, $R = r$. By combining the prediction and filtering steps, show that the following recurrence relations, written in terms of prior quantities, hold:
$$x_{k+1|k} = (1 - K_k) x_{k|k-1} + K_k y_k, \qquad (16)$$
$$K_k = \frac{p_{k|k-1}}{p_{k|k-1} + r}, \qquad (17)$$
$$p_{k+1|k} = \frac{p_{k|k-1}\, r}{p_{k|k-1} + r}. \qquad (18)$$
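A sketch of the exercise: with $M = 1$, $Q = 0$, $H = 1$, $R = r$, the prediction step gives $x_{k+1|k} = x_{k|k}$ and $p_{k+1|k} = p_{k|k}$, while the observation update gives
$$K_k = \frac{p_{k|k-1}}{p_{k|k-1} + r}, \qquad x_{k|k} = (1 - K_k) x_{k|k-1} + K_k y_k, \qquad p_{k|k} = (1 - K_k) p_{k|k-1} = \frac{p_{k|k-1}\, r}{p_{k|k-1} + r};$$
substituting the update into the prediction yields (16)-(18).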

If we divide the recurrence for $p_{k+1|k}$, (18), by $r$ on each side, and write $\rho_k = p_{k+1|k}/r$, we have
$$\rho_k = \frac{\rho_{k-1}}{\rho_{k-1} + 1}. \qquad (19)$$
Solving this nonlinear recurrence (for example by noting that $1/\rho_k = 1/\rho_{k-1} + 1$, so $1/\rho_k = 1/\rho_0 + k$) we find
$$\rho_k = \frac{\rho_0}{1 + k \rho_0}. \qquad (20)$$
So $\rho_k \to 0$ as $k \to \infty$, i.e., over time the error in the state estimate becomes vanishingly small.
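A quick numerical check of the recurrence (18) against the closed form (20); this is a minimal sketch with illustrative values only (initial prior variance $p_{1|0} = 1$, observation error variance $r = 0.5$).

```python
# Check that iterating the recurrence (18) reproduces the closed form (20).
r, p = 0.5, 1.0
rho0 = p / r
for k in range(1, 11):
    p = p * r / (p + r)                                 # recurrence (18) for the prior variance
    assert abs(p / r - rho0 / (1 + k * rho0)) < 1e-12   # closed form (20)
print("recurrence matches the closed-form solution; rho_k -> 0 as k grows")
```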

Kalman Filter properties

Minimum variance and MMSE

For our derivation we assumed
- a linear model (state propagation) and observation operator,
- Gaussian statistics for the uncertainty in the initial state and observations.

We found the Kalman filter as a minimum variance estimate under these assumptions. Since the posterior estimation error is $x_k - x_{k|k}$, $\mathrm{trace}(P_{k|k})$ also represents the expected mean square error. Hence our derivation shows the Kalman filter is a minimum mean square error (MMSE) estimate.

BLUE

If we stick with a linear model, but allow for non-Gaussian statistics, we find that the Kalman filter provides the Best Linear Unbiased Estimate (BLUE), or linear minimum mean square error (LMMSE) estimate.

Least squares - MAP estimate

Consider the least squares problem:
$$J_k(x_0) = \tfrac{1}{2} (x_0 - \bar{x}_0)^T P_0^{-1} (x_0 - \bar{x}_0) + \tfrac{1}{2} \sum_{l=1}^{k} (y_l - H_l x_l)^T R_l^{-1} (y_l - H_l x_l) + \tfrac{1}{2} \sum_{l=0}^{k-1} w_l^T Q_l^{-1} w_l, \qquad (21)$$
where $\bar{x}_0$ is the prior estimate of the initial state, subject to the constraint
$$x_{l+1} = M_l x_l + w_l, \quad l = 0, 1, \ldots, k-1. \qquad (22)$$

Remarks
- This is the same as the weak constraint 4D-Var cost function.
- The minimizing solution to the least squares problem, $x_0^a$, is the maximizer of the conditional probability $p(x_0, x_1, \ldots, x_k \mid y_1, \ldots, y_k)$. This is known as the MAP (maximum a posteriori) estimate.
- The solution of the least squares problem, evolved to the end of the time window, i.e., $x_k^a$, can be computed exactly as $x_{k|k}$ in the corresponding Kalman filter problem.
- In view of the constraints, we could alternatively minimize the cost function with respect to $x_k$. This leads to other forms of Kalman filter known as the recursive least squares (RLS) and information filters. This is covered by Simon (2006).

Filter stability

We have seen that the Kalman filter is an optimal filter. However, optimality does not imply stability.
- The Kalman filter is a stable filter in exact arithmetic.
- Stability in exact arithmetic does not imply numerical stability!

Theorem (see Jazwinski (1970))
If the dynamical system, (1), (2), is uniformly completely observable and uniformly completely controllable, then the Kalman filter is uniformly asymptotically stable.

Remarks
- Observability measures whether there is enough observation information. It takes into account the propagation of information with the model. (A simple rank check for the time-invariant case is sketched after this list.)
- Controllability measures whether it is possible to nudge the system to the correct solution by applying appropriate increments.
- Uniform asymptotic stability implies that, regardless of the initial data $x_0$, $P_0$, the errors in the Kalman filter solution will decay exponentially fast with time.
- Even with an unstable model $M$, the Kalman filter will stabilize the system!
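As an illustrative sketch (not from the lecture, and only for the time-invariant case rather than the uniform complete observability of the theorem): one simple related diagnostic is the rank of the observability matrix, which has full rank $n$ when the pair $(M, H)$ is observable. The function name and example values below are chosen for illustration.

```python
import numpy as np

def observability_matrix(M, H):
    """Stack H, HM, HM^2, ..., HM^{n-1}; full column rank n means (M, H) is observable."""
    n = M.shape[0]
    blocks = [H @ np.linalg.matrix_power(M, i) for i in range(n)]
    return np.vstack(blocks)

# Example: a 2D constant-velocity model in which only position is observed.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
print(np.linalg.matrix_rank(observability_matrix(M, H)))  # 2: position data also constrain velocity
```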

Filter divergence

Despite the nice stability properties of the filter in exact arithmetic, in practice the Kalman filter does suffer from filter divergence.
- Filter divergence is often made manifest through overconfidence in the filter prediction ($P_{k|k}$ too small), with subsequent observations having little effect on the estimate.
- Filter divergence can be caused by inaccurate descriptions of the model (and model error) dynamics, biased observations etc., as well as by numerical roundoff errors.

Compensating for numerical and other errors
- Naive implementations of the Kalman filter can blow up (see for example Verhaegen and van Dooren (1986) for error analysis).
- This can be improved by implementing a square root or UDU form of the filter. (This is discussed by Bierman (1977); more recent books provide short summaries, e.g. Simon (2006), chapter 6; Grewal and Andrews (2008), chapter 6.)
- Other errors may be compensated for by increasing the model error, or equivalently using a fading memory version of the filter. This has the effect of reducing the filter's confidence in the model, and hence recent observations are given more weight. (A minimal sketch of this idea appears after this list.)
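A minimal sketch of the fading-memory idea; the inflation factor alpha and the function name are illustrative assumptions, not from the lecture. The predicted covariance is inflated by a factor $\alpha^2 \ge 1$, which discounts older information relative to new observations.

```python
import numpy as np

def fading_memory_predict(x, P, M, Q, alpha=1.01):
    """Prediction step with covariance inflation (fading memory), alpha >= 1.

    alpha = 1 recovers the standard Kalman filter prediction; larger alpha
    down-weights the model relative to incoming observations.
    """
    x_pred = M @ x
    P_pred = alpha**2 * (M @ P @ M.T) + Q
    return x_pred, P_pred
```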

Conclusions

In this lecture we have
- derived the Kalman filter equations,
- seen the optimality of the filter as a minimum variance estimate,
- briefly discussed the stability of the filter in exact arithmetic,
- pointed out the dangers of naive implementation of the Kalman filter.

Some important aspects that we didn't cover
- Kalman smoothing
- Nonlinear extensions of the Kalman filter, such as the Extended Kalman filter (but see Janjic lecture!)

References

G.J. Bierman. Factorization Methods for Discrete Sequential Estimation. Academic Press, New York, 1977.
M.S. Grewal and A.P. Andrews. Kalman Filtering: Theory and Practice using MATLAB. Wiley, New Jersey, 2008.
A.H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, New York, 1970.
R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35-45, 1960.
D. Simon. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley, New Jersey, 2006.
M. Verhaegen and P. van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Trans. Auto. Control, AC-31(10), 1986.