L06. LINEAR KALMAN FILTERS NA568 Mobile Robotics: Methods & Algorithms
PS2 is out! Landmark-based Localization: EKF, UKF, PF
Today's Lecture: Minimum Mean Square Error (MMSE); Linear Kalman Filter; Gaussian systems: Optimal Unbiased Estimator; Non-Gaussian systems: Best Linear Unbiased Estimator (BLUE); KF Falling Body Demo
Estimation is… Data + Prior Beliefs → Estimation Engine → Estimate. From P. Newman, Oxford
Minimum Mean Squared Error Estimation: choose the estimate x̂ so that the argument E[ (x − x̂)^T (x − x̂) | Z ] is minimised, where E[·] is the expectation operator (the probabilistic "average"). From P. Newman, Oxford
Evaluating: from probability theory, the minimiser is the conditional mean, x̂_MMSE = E[x | Z]. A Very Important Thing. From P. Newman, Oxford
Recursive Bayesian Estimation. Key idea: one man's posterior is another's prior ;-) Given a sequence of data (measurements) Z^k = {z_1, …, z_k}, we want the conditional mean (MMSE estimate) of x given Z^k. Can we iteratively calculate this, i.e. every time a new measurement comes in, update our estimate? From P. Newman, Oxford
Yes: p(x | Z^k) ∝ p(z_k | x) p(x | Z^{k−1}). The likelihood p(z_k | x) explains the data at time k (we implicitly drop its dependence on Z^{k−1}); the prior p(x | Z^{k−1}) is the estimate at time k−1 viewed as a function of x at time k. And if these distributions are Gaussian, turning the handle leads to the Kalman filter. From P. Newman, Oxford
Bayes Filter Reminder Prior Prediction Correction
Kalman Filter Distribution Everything is Gaussian 1D 3D Courtesy: K. Arras
Properties of Gaussians Univariate Multivariate We stay in the Gaussian world as long as we start with Gaussians and perform only linear transformations.
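This closure property can be checked numerically; a minimal sketch (the map A, offset b, and moments below are illustrative, not from the slides):

```python
import numpy as np

# A Gaussian x ~ N(mu, Sigma) pushed through a linear map y = A x + b
# stays Gaussian, with mean A mu + b and covariance A Sigma A^T.
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
b = np.array([0.5, -1.0])

mu_y = A @ mu + b
Sigma_y = A @ Sigma @ A.T

# Monte Carlo check of the closed-form moments
rng = np.random.default_rng(0)
x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = x @ A.T + b
print(mu_y, y.mean(axis=0))       # sample mean matches A mu + b
print(Sigma_y, np.cov(y.T))       # sample covariance matches A Sigma A^T
```

A nonlinear map would break this: the pushforward of a Gaussian through a nonlinearity is generally not Gaussian, which is why the later EKF/UKF lectures are needed.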
Gaussian Covariance & Information Parameterizations: A Dual Relationship. Marginalization: covariance form keeps a sub-block; information form requires a Schur complement. Conditioning: covariance form requires a Schur complement; information form keeps a sub-block.
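The duality between the two parameterizations can be verified on a small example; a sketch with an illustrative 2x2 joint covariance over scalar blocks (a, b):

```python
import numpy as np

# Joint covariance over (a, b); the information matrix is its inverse.
Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])
Lam = np.linalg.inv(Sigma)

# Marginalizing out b:
#   covariance form:  keep the sub-block Sigma_aa
#   information form: Schur complement Lam_aa - Lam_ab Lam_bb^-1 Lam_ba
Sigma_a_marg = Sigma[0, 0]
Lam_a_marg = Lam[0, 0] - Lam[0, 1] * Lam[1, 1] ** -1 * Lam[1, 0]
assert np.isclose(1.0 / Lam_a_marg, Sigma_a_marg)

# Conditioning on b: the roles swap.
#   covariance form:  Schur complement Sigma_aa - Sigma_ab Sigma_bb^-1 Sigma_ba
#   information form: keep the sub-block Lam_aa
Sigma_a_cond = Sigma[0, 0] - Sigma[0, 1] * Sigma[1, 1] ** -1 * Sigma[1, 0]
assert np.isclose(1.0 / Lam[0, 0], Sigma_a_cond)
```

The operation that is cheap in one form is exactly the one that is expensive in the other, which is why filtering favors the covariance form and smoothing/SLAM back-ends often favor the information form.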
Discrete Kalman Filter. Estimates the (n × 1) state x_t of a discrete-time controlled process governed by the linear stochastic difference equation x_t = A_t x_{t−1} + B_t u_t + ε_t, observed through (k × 1) measurements z_t = C_t x_t + δ_t.
Components of a Kalman Filter. A_t: matrix (n × n) that describes how the state evolves from t−1 to t without controls or noise. B_t: matrix (n × m) that describes how the control u_t changes the state from t−1 to t. C_t: matrix (k × n) that describes a projection of the state x_t to an observation z_t. ε_t, δ_t: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances R_t and Q_t, respectively.
Linear Gaussian Systems: Initialization. Initial belief is normally distributed: bel(x_0) = N(x_0; μ_0, Σ_0).
Linear Gaussian Systems: Dynamics. Dynamics are a linear function of state and control plus additive noise: x_t = A_t x_{t−1} + B_t u_t + ε_t, so that p(x_t | u_t, x_{t−1}) = N(x_t; A_t x_{t−1} + B_t u_t, R_t).
Linear Gaussian Systems: Dynamics (cont.). Propagating the Gaussian belief through the linear dynamics gives the prediction step: μ̄_t = A_t μ_{t−1} + B_t u_t, Σ̄_t = A_t Σ_{t−1} A_t^T + R_t.
Linear Gaussian Systems: Observations. Observations are a linear function of the state plus additive noise: z_t = C_t x_t + δ_t, so that p(z_t | x_t) = N(z_t; C_t x_t, Q_t).
Linear Gaussian Systems: Observations (cont.). Conditioning the predicted Gaussian on z_t gives the correction step: K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + Q_t)^{−1}, μ_t = μ̄_t + K_t (z_t − C_t μ̄_t), Σ_t = (I − K_t C_t) Σ̄_t.
Kalman Filter Algorithm
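The algorithm's two steps can be sketched directly in code, using the slide's A, B, C, R (process noise), Q (measurement noise) notation; the function names and shapes are illustrative:

```python
import numpy as np

def kf_predict(mu, Sigma, u, A, B, R):
    """Prediction: push the Gaussian belief through the linear dynamics."""
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R        # R = process noise covariance
    return mu_bar, Sigma_bar

def kf_correct(mu_bar, Sigma_bar, z, C, Q):
    """Correction: fuse the predicted belief with measurement z."""
    S = C @ Sigma_bar @ C.T + Q            # innovation covariance (Q = meas. noise)
    K = Sigma_bar @ C.T @ np.linalg.inv(S) # Kalman gain
    mu = mu_bar + K @ (z - C @ mu_bar)     # correct mean by weighted innovation
    Sigma = (np.eye(len(mu_bar)) - K @ C) @ Sigma_bar
    return mu, Sigma
```

One filter cycle is then just `kf_predict` followed by `kf_correct`; everything stays Gaussian, so only (μ, Σ) are carried between cycles.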
1D Kalman Filter Example (1) prediction measurement correction It's a weighted mean!
1D Kalman Filter Example (2) prediction correction measurement
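In 1D the correction really is just a weighted mean: the posterior mean sits between prediction and measurement, weighted by their inverse variances. A sketch with illustrative numbers (not the slide's):

```python
# 1D Kalman correction: posterior mean is a weighted average of the
# predicted mean and the measurement, weighted by inverse variances.
def kf_correct_1d(mu_pred, var_pred, z, var_meas):
    k = var_pred / (var_pred + var_meas)   # Kalman gain in 1D
    mu = mu_pred + k * (z - mu_pred)
    var = (1.0 - k) * var_pred             # posterior variance always shrinks
    return mu, var

# Prediction at 10 (variance 4), measurement at 12 (variance 1):
mu, var = kf_correct_1d(10.0, 4.0, 12.0, 1.0)
print(mu, var)  # mean is pulled strongly toward the more certain measurement
```

Note the posterior variance is smaller than either input variance: fusing two Gaussian opinions always reduces uncertainty.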
The Prediction-Correction Cycle: the filter alternates between prediction (propagating the belief through the dynamics) and correction (fusing the predicted belief with the incoming measurement), then repeats.
Alternative (Equivalent) Covariance Update Expressions. Defining the innovation/observation covariance as S_t = C_t Σ̄_t C_t^T + Q_t, alternative update expressions are (see Bar-Shalom, Chap. 5): 1. Σ_t = (I − K_t C_t) Σ̄_t; 2. Σ_t = Σ̄_t − K_t S_t K_t^T; 3. Σ_t = (I − K_t C_t) Σ̄_t (I − K_t C_t)^T + K_t Q_t K_t^T (Joseph form).
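The algebraic equivalence of the three updates is easy to confirm numerically; a sketch with randomly generated (illustrative) matrices, using the KF gain K = Σ̄ H^T S^{-1}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)          # predicted covariance (symmetric PD)
H = rng.standard_normal((m, n))      # observation matrix
R = np.diag([0.5, 0.2])              # measurement noise covariance

S = H @ P @ H.T + R                  # innovation covariance
K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
I = np.eye(n)

P1 = (I - K @ H) @ P                                  # standard form
P2 = P - K @ S @ K.T                                  # gain/innovation form
P3 = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T    # Joseph form

assert np.allclose(P1, P2) and np.allclose(P2, P3)
```

In finite precision the forms are not interchangeable: the Joseph form is symmetric and positive semidefinite by construction, which is why it is preferred in numerically sensitive implementations.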
What if the statistics are not Gaussian? The structure of the KF corresponds to the Best Linear Unbiased Estimator (BLUE): if we restrict our estimator to the class of linear (affine functions of z) estimators, then the KF is the best linear MMSE estimator.* Remark: the best estimator (in the MMSE sense) for Gaussian random variables is identical to the best linear estimator for arbitrarily distributed random variables with the same first- and second-order moments. This result will be proved (and used) in L07. *Note: a nonlinear estimator could do better!
Falling Body Example. Governing equation: ÿ = −g. CT state-space description: d/dt [y, v]^T = [[0, 1], [0, 0]] [y, v]^T + [0, −1]^T g, with initial conditions y_0, v_0. DT state-space description: y_t = y_{t−1} + v_{t−1} Δt − (g/2) Δt², v_t = v_{t−1} − g Δt.
ProbRob Notation. Initial state: x_0 ~ N(μ_0, Σ_0). Process model: x_t = A_t x_{t−1} + B_t u_t + ε_t. Observation model: z_t = C_t x_t + δ_t.
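Discretizing the falling-body model with step Δt gives the DT matrices directly; a sketch (Δt and the starting state are illustrative values, not from the slides):

```python
import numpy as np

g, dt = 9.81, 0.1  # gravity [m/s^2] and time step [s] (dt is illustrative)

# Discrete-time model for state x = [y, v]^T under constant acceleration -g:
#   x_t = A x_{t-1} + b    (exact for this constant-input system)
A = np.array([[1.0, dt],
              [0.0, 1.0]])
b = np.array([-0.5 * g * dt**2, -g * dt])

# One step from y0 = 100 m, v0 = 0 m/s:
x = np.array([100.0, 0.0])
x = A @ x + b
print(x)  # position drops by g*dt^2/2, velocity by g*dt
```

Because gravity is a known constant input, it plays the role of B_t u_t in the ProbRob notation; only the noise terms ε_t and δ_t remain random.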
Falling Body: Prediction. The process model builds correlation between position and velocity. The falling body's position uncertainty increases during open-loop prediction due to uncertainty in velocity.
Falling Body: Correction. Zero state uncertainty case: with Σ̄ = 0, the Kalman gain is zero, so the measurement is ignored and the prediction passes through unchanged.
Falling Body: Correction. Infinite state uncertainty case: as Σ̄ → ∞, the prior carries no information, so the correction trusts the measurement completely.
[Figure: Noisy Measured Falling Body Position. Measured y [m] vs t [s]; range measurements with σ = 12 m.]
[Figure: KF results. Estimated falling body position and velocity with 3σ error bounds vs t [s]; y_true, y_meas, y_est and v_true, v_est shown. x_0 = [89.1859, −8.32792]^T, diag(P_0)^{1/2} = [25, 5]^T, R^{1/2} = 12.]
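Putting the pieces together, a minimal end-to-end sketch of the filter on the falling-body demo, using the slide's initial estimate and σ = 12 m range noise; the time step, horizon, true initial state, and the absence of process noise are assumptions for illustration:

```python
import numpy as np

g, dt, steps = 9.81, 0.1, 45
A = np.array([[1.0, dt], [0.0, 1.0]])  # constant-acceleration dynamics
b = np.array([-0.5 * g * dt**2, -g * dt])
H = np.array([[1.0, 0.0]])             # we only measure position
R = np.array([[12.0**2]])              # range noise, sigma = 12 m

# Initial estimate and covariance from the slide
x = np.array([89.1859, -8.32792])
P = np.diag([25.0**2, 5.0**2])

rng = np.random.default_rng(2)
truth = np.array([100.0, 0.0])         # assumed true initial state
for _ in range(steps):
    truth = A @ truth + b
    z = H @ truth + rng.normal(0.0, 12.0, size=1)
    # predict
    x = A @ x + b
    P = A @ P @ A.T                    # no process noise in this demo
    # correct
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x, truth)  # estimate approaches the true state; P has contracted
```

Even though only position is measured, the velocity estimate converges too, because the process model correlates position and velocity, exactly the effect noted on the prediction slide.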
Commonly used: the i|j notation. x̂(i|j) denotes the estimate of x at time i given data up to time j; x(k) denotes the true state. The true state is useful for derivations, but we can never use it in a calculation, since the truth x is unknown!
KF loop (flowchart, in the i|j notation):
1. Prediction. If a new control u(k) is available (input to the plant enters here, e.g. steering wheel angle): x(k|k-1) = F x(k-1|k-1) + B u(k); P(k|k-1) = F P(k-1|k-1) F^T + G Q G^T. Otherwise the prediction is the last estimate: x(k|k-1) = x(k-1|k-1); P(k|k-1) = P(k-1|k-1).
2. Update. If a new observation z(k) is available (data from sensors enter the algorithm here, e.g. a compass measurement), prepare the update: S = H P(k|k-1) H^T + R; W = P(k|k-1) H^T S^-1; v(k) = z(k) - H x(k|k-1); then update: x(k|k) = x(k|k-1) + W v(k); P(k|k) = P(k|k-1) - W S W^T. Otherwise the estimate is the last prediction: x(k|k) = x(k|k-1); P(k|k) = P(k|k-1).
3. Delay: set k = k+1 and repeat.
Kalman Filter Summary. Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2). Optimal for linear Gaussian systems! But most robotics systems are nonlinear!