E190Q Lecture 11: Autonomous Robot Navigation. Instructor: Chris Clark. Semester: Spring 2013. Figures courtesy of Siegwart & Nourbakhsh.
Control Structures: Planning Based Control (block diagram: prior knowledge and operator commands feed the localization, cognition, perception, and motion control modules).
Kalman Filter Localization
Introduction to Kalman Filters
1. KF Representations
2. Two Measurement Sensor Fusion
3. Single Variable Kalman Filtering
4. Multi-Variable KF Representations
Kalman Filter Localization
KF Representations: What do Kalman Filters use to represent the states being estimated? Gaussian distributions!
KF Representations
Single-variable Gaussian distribution:
- Symmetrical, uni-modal
- Characterized by mean µ and variance σ²
- Properties: propagation of errors, product of Gaussians
KF Representations: Single-Var. Gaussian Characterization
Mean: the expected value of a random variable with continuous probability density function p(x):
µ = E[X] = ∫ x p(x) dx
For a discrete set of K samples:
µ = (1/K) Σ_{k=1}^{K} x_k
KF Representations: Single-Var. Gaussian Characterization
Variance: the expected value of the squared difference from the mean:
σ² = E[(X − µ)²] = ∫ (x − µ)² p(x) dx
For a discrete set of K samples:
σ² = (1/K) Σ_{k=1}^{K} (x_k − µ)²
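As a quick check of these two sample formulas, a minimal NumPy sketch (the data is synthetic, generated only for illustration):

```python
import numpy as np

# K samples drawn from a Gaussian with known mean and standard deviation
rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=0.5, size=10000)  # mu = 2.0, sigma = 0.5

# Sample mean: mu = (1/K) * sum(x_k)
mu = samples.sum() / samples.size

# Sample variance: sigma^2 = (1/K) * sum((x_k - mu)^2)
var = ((samples - mu) ** 2).sum() / samples.size

print(mu, var)  # should be close to 2.0 and 0.25
```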
KF Representations: Single-Variable Gaussian Properties: Propagation of Errors (equation figure; reproduced below).
KF Representations: Single-Variable Gaussian Properties: Product of Gaussians (equation figure; reproduced below).
KF Representations: Single-Variable Gaussian Properties: We stay in the Gaussian world as long as we start with Gaussians and perform only linear transformations.
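The two properties named above appeared as equation figures that did not survive extraction; the standard single-variable identities they refer to are:

```latex
% Propagation of errors through a linear transformation:
% if X ~ N(mu, sigma^2) and Y = aX + b, then
Y \sim N\!\left(a\mu + b,\; a^2\sigma^2\right)

% Product of two Gaussian densities (up to normalization):
N(\mu_1, \sigma_1^2)\cdot N(\mu_2, \sigma_2^2) \propto N(\mu, \sigma^2),\quad
\mu = \frac{\mu_1\sigma_2^2 + \mu_2\sigma_1^2}{\sigma_1^2 + \sigma_2^2},\quad
\sigma^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}
```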
Kalman Filter Localization
Introduction to Kalman Filters
1. KF Representations
2. Two Measurement Sensor Fusion
3. Single Variable Kalman Filtering
4. Multi-Variable KF Representations
Kalman Filter Localization
Fusing Two Measurements: Example
Given two measurements q₁ and q₂, how do we fuse them to obtain an estimate q̂?
Assume the measurements are modeled as random variables that follow Gaussian distributions with variances σ₁² and σ₂² respectively.
Fusing Two Measurements: Example (cont'd)
Let's frame the problem as minimizing a weighted least squares cost function:
C = Σ_{i=1}^{n} wᵢ (q̂ − qᵢ)²
Fusing Two Measurements: Example (cont'd)
If n = 2 and wᵢ = 1/σᵢ²:
q̂ = q₁ + [σ₁² / (σ₁² + σ₂²)] (q₂ − q₁)
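The step from the cost function to this result: set the derivative of C with respect to q̂ to zero and solve:

```latex
\frac{\partial C}{\partial \hat{q}} = 2\sum_{i=1}^{n} w_i(\hat{q} - q_i) = 0
\;\Rightarrow\;
\hat{q} = \frac{\sum_i w_i q_i}{\sum_i w_i}
\;\overset{n=2,\; w_i = 1/\sigma_i^2}{=}\;
\frac{\sigma_2^2 q_1 + \sigma_1^2 q_2}{\sigma_1^2 + \sigma_2^2}
= q_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}(q_2 - q_1)
```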
Kalman Filter Localization
Introduction to Kalman Filters
1. KF Representations
2. Two Measurement Sensor Fusion
3. Single Variable Kalman Filtering
4. Multi-Variable KF Representations
Kalman Filter Localization
Single Variable KF: Example, Fusing Two Measurements
q̂ = q₁ + [σ₁² / (σ₁² + σ₂²)] (q₂ − q₁)
We can reformulate this in KF notation:
x_t = x_{t−1} + K_t (z_t − x_{t−1})
K_t = σ²_{t−1} / (σ²_{t−1} + σ_z²)
Single Variable KF: KF for a Discrete Time System
x_t = x_{t−1} + K_t (z_t − x_{t−1})
K_t = σ²_{t−1} / (σ²_{t−1} + σ_z²)
σ²_t = σ²_{t−1} − K_t σ²_{t−1}
where
- x_t is the current state estimate
- σ²_t is the associated variance
- z_t is the most recent measurement
- K_t is the Kalman gain
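A minimal sketch of this recursion in Python (the function and variable names are mine, and the measurement stream is simulated for illustration):

```python
import numpy as np

def kf_update(x, var, z, var_z):
    """One single-variable Kalman filter step: fuse the estimate (x, var)
    with a measurement z of variance var_z."""
    K = var / (var + var_z)   # Kalman gain
    x = x + K * (z - x)       # state update
    var = var - K * var       # variance update (always shrinks)
    return x, var

# Example: repeatedly measure a constant true value of 5.0
rng = np.random.default_rng(1)
x, var = 0.0, 100.0                     # poor initial estimate, large variance
for _ in range(20):
    z = 5.0 + rng.normal(scale=0.5)     # noisy measurement, sigma_z = 0.5
    x, var = kf_update(x, var, z, var_z=0.25)
print(x, var)                           # x approaches 5.0, var shrinks
```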
Kalman Filter Localization
Introduction to Kalman Filters
1. KF Representations
2. Two Measurement Sensor Fusion
3. Single Variable Kalman Filtering
4. Multi-Variable KF Representations
Kalman Filter Localization
Representations in KF
Multi-variable Gaussian distribution:
- Symmetrical, uni-modal
- Characterized by mean vector µ and covariance matrix Σ
- Properties: propagation of errors, product of Gaussians
Representations in KF: Multi-Var. Gaussian Characterization
Mean vector: the vector of expected values of the n random variables:
µ = E[X] = [µ₀ µ₁ µ₂ … µₙ]ᵀ,  µᵢ = ∫ xᵢ p(xᵢ) dxᵢ
Representations in KF: Multi-Var. Gaussian Characterization
Covariance: the expected value of the product of the two variables' deviations from their means:
σᵢⱼ = Cov[Xᵢ, Xⱼ] = E[(Xᵢ − µᵢ)(Xⱼ − µⱼ)]
Covariance is a measure of how much two random variables change together:
- Positive σᵢⱼ: when variable i is above its expected value, variable j tends to be above its mean µⱼ as well.
- Negative σᵢⱼ: when variable i is above its expected value, variable j tends to be below its mean µⱼ.
Representations in KF: Multi-Var. Gaussian Characterization
Covariance, for continuous random variables:
σᵢⱼ = ∫∫ (xᵢ − µᵢ)(xⱼ − µⱼ) p(xᵢ, xⱼ) dxᵢ dxⱼ
For a discrete set of K samples:
σᵢⱼ = (1/K) Σ_{k=1}^{K} (x_{i,k} − µᵢ)(x_{j,k} − µⱼ)
Representations in KF: Multi-Var. Gaussian Characterization
Covariance matrix: the covariance between each pair of random variables:
Σ = [ σ₀₀ σ₀₁ … σ₀ₙ
      σ₁₀ σ₁₁ … σ₁ₙ
      ⋮            ⋮
      σₙ₀ σₙ₁ … σₙₙ ]
Note: σᵢᵢ = σᵢ²
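The discrete-sample covariance formula translates directly to NumPy; a small sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
true_cov = np.array([[1.0, 0.6],
                     [0.6, 2.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_cov, size=5000)

mu = X.mean(axis=0)              # mean vector
D = X - mu                       # deviations from the mean
Sigma = D.T @ D / X.shape[0]     # sigma_ij = (1/K) * sum of products
print(Sigma)                     # close to true_cov; Sigma[i, i] = var_i
```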
Representations in KF: Multi-Variable Gaussian Properties: Propagation of Errors (equation figure; reproduced below).
Representations in KF: Multi-Variable Gaussian Properties: Product of Gaussians (equation figure; reproduced below).
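As with the single-variable case, these two properties were equation figures; their standard multi-variable forms are:

```latex
% Propagation of errors: if X ~ N(mu, Sigma) and Y = AX + b, then
Y \sim N\!\left(A\mu + b,\; A\Sigma A^{T}\right)

% Product of two Gaussian densities (information form, up to normalization):
\Sigma = \left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)^{-1},\qquad
\mu = \Sigma\left(\Sigma_1^{-1}\mu_1 + \Sigma_2^{-1}\mu_2\right)
```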
Next: apply the Kalman Filter to multiple variables, in the form of EKF localization.
Kalman Filter Localization
Introduction to Kalman Filters
Kalman Filter Localization
1. EKF Localization Overview
2. EKF Prediction
3. EKF Correction
4. Algorithm Summary
Extended Kalman Filter Localization: Robot State Representation
State vector to be estimated, x, e.g.
x = [x y θ]ᵀ
Associated covariance, P:
P = [ σ_xx σ_xy σ_xθ
      σ_yx σ_yy σ_yθ
      σ_θx σ_θy σ_θθ ]
Extended Kalman Filter Localization: Iterative Algorithm
1. Prediction: use a motion model and odometry to predict the state of the robot and its covariance, x̂_t and P̂_t.
2. Correction: use a sensor model and measurements to correct the predicted state and its covariance, yielding x_t and P_t.
Kalman Filter Localization
Introduction to Kalman Filters
Kalman Filter Localization
1. EKF Localization Overview
2. EKF Prediction
3. EKF Correction
4. Algorithm Summary
EKFL Prediction Step: Motion Model
Let's use a general form of a motion model: a discrete time equation that predicts the current state of the robot given the previous state x_{t−1} and the odometry u_t:
x̂_t = f(x_{t−1}, u_t)
EKFL Prediction Step: Motion Model
For our differential drive robot:
x_{t−1} = [x_{t−1} y_{t−1} θ_{t−1}]ᵀ
u_t = [Δs_{r,t} Δs_{l,t}]ᵀ
EKFL Prediction Step: Motion Model
And the model we derived:
x̂_t = f(x_{t−1}, u_t) = [ x_{t−1} + Δs_t cos(θ_{t−1} + Δθ_t/2)
                          y_{t−1} + Δs_t sin(θ_{t−1} + Δθ_t/2)
                          θ_{t−1} + Δθ_t ]
where
Δs_t = (Δs_{r,t} + Δs_{l,t}) / 2
Δθ_t = (Δs_{r,t} − Δs_{l,t}) / b
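A direct transcription of this model in Python (a sketch; the function name is mine, and b is the wheelbase):

```python
import numpy as np

def motion_model(x_prev, u, b):
    """Differential drive odometry update.
    x_prev = [x, y, theta]; u = [ds_r, ds_l]; b = wheelbase."""
    x, y, theta = x_prev
    ds_r, ds_l = u
    ds = (ds_r + ds_l) / 2.0        # distance traveled by the robot center
    dtheta = (ds_r - ds_l) / b      # change in heading
    return np.array([
        x + ds * np.cos(theta + dtheta / 2.0),
        y + ds * np.sin(theta + dtheta / 2.0),
        theta + dtheta,
    ])
```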
EKFL Prediction Step: Covariance
Recall the propagation of errors equation for linear transformations of Gaussians.
EKFL Prediction Step: Covariance
Our equation f() is not linear, so to use the property we linearize with a first order approximation:
x̂_t = f(x_{t−1}, u_t) ≈ F_{x,t} x_{t−1} + F_{u,t} u_t
where
F_{x,t} = derivative of f with respect to the state x_{t−1}
F_{u,t} = derivative of f with respect to the control u_t
EKFL Prediction Step: Covariance
Here, we linearize the motion model f to obtain:
P̂_t = F_{x,t} P_{t−1} F_{x,t}ᵀ + F_{u,t} Q_t F_{u,t}ᵀ
where
Q_t = motion error covariance matrix
F_{x,t} = derivative of f with respect to the state x_{t−1}
F_{u,t} = derivative of f with respect to the control u_t
EKFL Prediction Step: Covariance
Q_t = [ k Δs_{r,t}    0
        0          k Δs_{l,t} ]
F_{x,t} = [∂f/∂x_{t−1} ∂f/∂y_{t−1} ∂f/∂θ_{t−1}]
F_{u,t} = [∂f/∂Δs_{r,t} ∂f/∂Δs_{l,t}]
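Worked out for the differential drive model above (writing θ′ = θ_{t−1} + Δθ_t/2 for brevity), the two Jacobians are:

```latex
F_{x,t} =
\begin{bmatrix}
1 & 0 & -\Delta s_t \sin\theta' \\
0 & 1 &  \Delta s_t \cos\theta' \\
0 & 0 & 1
\end{bmatrix},
\qquad
F_{u,t} =
\begin{bmatrix}
\tfrac{1}{2}\cos\theta' - \tfrac{\Delta s_t}{2b}\sin\theta' &
\tfrac{1}{2}\cos\theta' + \tfrac{\Delta s_t}{2b}\sin\theta' \\
\tfrac{1}{2}\sin\theta' + \tfrac{\Delta s_t}{2b}\cos\theta' &
\tfrac{1}{2}\sin\theta' - \tfrac{\Delta s_t}{2b}\cos\theta' \\
\tfrac{1}{b} & -\tfrac{1}{b}
\end{bmatrix}
```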
EKFL Prediction Step: Motion Model (figure: the pose prediction from x_{t−1} to x̂_t).
Kalman Filter Localization
Introduction to Kalman Filters
Kalman Filter Localization
1. EKF Localization Overview
2. EKF Prediction
3. EKF Correction
4. Algorithm Summary
EKFL Correction Step: Innovation
We correct by comparing the current measurements z_t with what we expect to observe, ẑ_{exp,t}, given our predicted location in the map M.
The amount we correct our state is proportional to the innovation v_t:
v_t = z_t − ẑ_{exp,t}
EKFL Correction Step: The Measurement
Assume our robot measures the relative location of a wall i, extracted as a line:
z^i_t = [α^i_t r^i_t]ᵀ
R^i_t = [ σ^i_{αα,t} σ^i_{αr,t}
          σ^i_{rα,t} σ^i_{rr,t} ]
EKFL Correction Step: The Measurement
The line is extracted from n range-bearing measurements (ρ, β):
z^i_t = [α^i_t r^i_t]ᵀ = g(ρ₁, ρ₂, …, ρₙ, β₁, β₂, …, βₙ)
EKFL Correction Step: The Measurement
R^i_t = [ σ^i_{αα,t} σ^i_{αr,t}
          σ^i_{rα,t} σ^i_{rr,t} ] = G_{ρβ,t} Σ_{z,t} G_{ρβ,t}ᵀ
where
Σ_{z,t} = sensor error covariance matrix
G_{ρβ,t} = derivative of g() with respect to the measurements ρ_t, β_t
EKFL Correction Step: The Expected Measurement
ẑ^i_{exp,t} = h^i(x̂_t, M) = [ α^i_M − θ̂_t
                              r^i_M − (x̂_t cos α^i_M + ŷ_t sin α^i_M) ]
EKFL Correction Step (figure: geometry of the expected line measurement; the terms x cos(α) and y sin(α) project the robot position onto the map line normal).
EKFL Correction Step
The covariance associated with the innovation is:
Σ_{IN,t} = H^i_{x,t} P̂_t H^i_{x,t}ᵀ + R^i_t
where
R^i_t = line measurement error covariance matrix
H^i_{x,t} = derivative of h with respect to the state x̂_t
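For the line measurement model h^i above, the Jacobian with respect to the state (x, y, θ) works out to:

```latex
H^{i}_{x,t} = \frac{\partial h^{i}}{\partial x}
= \begin{bmatrix}
0 & 0 & -1 \\
-\cos\alpha^{i}_{M} & -\sin\alpha^{i}_{M} & 0
\end{bmatrix}
```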
EKFL Correction Step: Final Updates
Update the state estimate:
x_t = x̂_t + K_t v_t
Update the associated covariance matrix:
P_t = P̂_t − K_t Σ_{IN,t} K_tᵀ
Both use the Kalman gain matrix:
K_t = P̂_t H_{x,t}ᵀ (Σ_{IN,t})⁻¹
EKFL Correction Step: Compare with the Single-Var. KF
Update the state estimate:
x_t = x_{t−1} + K_t (z_t − x_{t−1})
Update the associated covariance matrix:
σ²_t = σ²_{t−1} − K_t σ²_{t−1}
Both use the Kalman gain:
K_t = σ²_{t−1} / (σ²_{t−1} + σ_z²)
EKFL Correction Step: Final Updates
By fusing the prediction of the robot position (magenta) with the innovation gained from the measurements (green), we get the updated estimate of the robot position (red).
Kalman Filter Localization
Introduction to Kalman Filters
Kalman Filter Localization
1. EKF Localization Overview
2. EKF Prediction
3. EKF Correction
4. Algorithm Summary
EKFL Summary (the gain K_t is computed before the state and covariance updates that use it; a Python sketch of one full iteration follows)
Prediction:
1. x̂_t = f(x_{t−1}, u_t)
2. P̂_t = F_{x,t} P_{t−1} F_{x,t}ᵀ + F_{u,t} Q_t F_{u,t}ᵀ
Correction:
3. ẑ^i_{exp,t} = h^i(x̂_t, M)
4. v_t = z_t − ẑ_{exp,t}
5. Σ_{IN,t} = H^i_{x,t} P̂_t H^i_{x,t}ᵀ + R^i_t
6. K_t = P̂_t H_{x,t}ᵀ (Σ_{IN,t})⁻¹
7. x_t = x̂_t + K_t v_t
8. P_t = P̂_t − K_t Σ_{IN,t} K_tᵀ
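A minimal Python sketch of one full predict/correct iteration under the equations above (all function and variable names are mine; motion_model is the sketch from the prediction step, and the matched map lines are assumed given, i.e. data association is not shown):

```python
import numpy as np

def jacobians(x_prev, u, b):
    """F_x,t and F_u,t of the differential drive motion model."""
    ds = (u[0] + u[1]) / 2.0
    dth = (u[0] - u[1]) / b
    t = x_prev[2] + dth / 2.0
    Fx = np.array([[1.0, 0.0, -ds * np.sin(t)],
                   [0.0, 1.0,  ds * np.cos(t)],
                   [0.0, 0.0,  1.0]])
    Fu = np.array([[0.5 * np.cos(t) - ds / (2 * b) * np.sin(t),
                    0.5 * np.cos(t) + ds / (2 * b) * np.sin(t)],
                   [0.5 * np.sin(t) + ds / (2 * b) * np.cos(t),
                    0.5 * np.sin(t) - ds / (2 * b) * np.cos(t)],
                   [1.0 / b, -1.0 / b]])
    return Fx, Fu

def ekf_localize_step(x, P, u, b, k, measurements, matched_lines):
    """One EKF localization iteration (steps 1-8 of the summary).
    x, P          : previous pose [x, y, theta] and its 3x3 covariance
    u             : odometry [ds_r, ds_l];  b: wheelbase;  k: noise constant
    measurements  : list of (z, R), z = [alpha, r], R its 2x2 covariance
    matched_lines : matching map lines, one (alpha_M, r_M) per measurement
    """
    # Prediction (steps 1-2)
    x_hat = motion_model(x, u, b)                  # sketch from the prediction step
    Fx, Fu = jacobians(x, u, b)
    Q = np.diag([k * abs(u[0]), k * abs(u[1])])    # motion error covariance
    P_hat = Fx @ P @ Fx.T + Fu @ Q @ Fu.T

    # Correction (steps 3-8), folding in one matched line at a time
    for (z, R), (alpha_M, r_M) in zip(measurements, matched_lines):
        z_exp = np.array([alpha_M - x_hat[2],
                          r_M - (x_hat[0] * np.cos(alpha_M)
                                 + x_hat[1] * np.sin(alpha_M))])
        H = np.array([[0.0, 0.0, -1.0],
                      [-np.cos(alpha_M), -np.sin(alpha_M), 0.0]])
        v = z - z_exp                              # innovation
        S = H @ P_hat @ H.T + R                    # innovation covariance
        K = P_hat @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_hat = x_hat + K @ v                      # state update
        P_hat = P_hat - K @ S @ K.T                # covariance update
    return x_hat, P_hat
```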
EKFL Example: http://www.youtube.com/watch?v=8mywutacal4