1 Vlad Estivill-Castro Robots for People --- A project for intelligent integrated systems
Probabilistic Map-based Localization (Kalman Filter), Chapter 5 (textbook)
Based on the textbook Introduction to Autonomous Mobile Robots, second edition, by Roland Siegwart, Illah R. Nourbakhsh, and Davide Scaramuzza, and on Michael Williams's Introduction to Kalman Filters
Belief, in one dimension (figure)
Example 4: Gaussian distribution
The famous (analytically tractable) bell-shaped curve defined by the formula
N(µ, σ²)(x) = (1 / (σ √(2π))) e^( -(x - µ)² / (2σ²) )
where µ is the mean value and σ is the standard deviation
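As a quick sketch (plain Python; the function name is mine, not from the slides), the density can be evaluated directly from this formula:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x:
    exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
```

The peak sits at the mean, with height 1 / (σ √(2π)), which is why a smaller σ means a taller, narrower (more certain) belief.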
Action and perception updates
In robot localization, we distinguish two update steps:
1. Action update: the robot moves and estimates its position through its proprioceptive sensors (motion model). During this step, the robot's uncertainty grows.
2. Perception update: the robot makes an observation using exteroceptive sensors and corrects its position by opportunely combining its belief before the observation with the probability of making exactly that observation (fusion). During this step, the robot's uncertainty shrinks.
(Figure: the belief before the observation, the probability of making the observation, and the updated belief after fusion.)
Kalman filter
A continuous, recursive, and very compact solution to probabilistic position estimation
Assumes a normal distribution is the suitable sensor (and motion) model
In general
The solution to the probabilistic localization problem is a method to compute the probability distribution of the configuration during each ACTION step and each PERCEPTION step
Solution to the probabilistic localization --- ingredients
1. The initial probability distribution Prob(x) at t=0
2. A motion model: the statistical error model of the proprioceptive sensors (e.g. wheel encoders)
3. A sensor model: the statistical error model of the exteroceptive sensors (e.g. laser, sonar, camera)
4. A map of the environment
Markov versus Kalman localization
Kalman: the probability distributions of both the robot configuration and the sensor model are assumed to be continuous and Gaussian. Since a Gaussian distribution is described by its mean value and variance, we only update these parameters. Computational cost is low.
Markov: the configuration space is divided into many cells, and the probability distribution of the sensor model is also discrete. During the action and perception updates, all the cells are updated.
The Problem
(Block diagram: external controls drive the system, a black box subject to system error sources; measuring devices, subject to measurement error sources, produce the observed measurements; an estimator turns these into an optimal estimate of the system state, which is desired but not known.)
The system state cannot be measured directly; we need to estimate it optimally from the measurements.
Kalman Filter Localization
(The same block diagram, with the Kalman filter as the estimator producing the optimal estimate of the system state.)
What is a Kalman Filter?
A recursive data-processing algorithm that generates an optimal estimate of the desired quantities given the set of measurements.
Optimal? For a linear system with white Gaussian errors, the Kalman filter is the best estimate based on all previous measurements. For a non-linear system, optimality is qualified.
Recursive? It doesn't need to store all previous measurements and reprocess all the data at each time step.
Conceptual Overview
A simple example to motivate the workings of the Kalman Filter. Theoretical justification comes later; for now just focus on the concept.
Important: Prediction and Correction
Conceptual Overview (1D)
Lost on a 1-dimensional line. Position: y(t). Assume Gaussian-distributed measurements.
Conceptual Overview
(Figure: LEARN from a sensor, our initial state known.)
Sextant measurement at t₁: mean = z₁ and variance = σ²_z1
Optimal estimate of position: ŷ(t₁) = z₁
Variance of error in estimate: σ²_x(t₁) = σ²_z1
Boat in same position at time t₂: predicted position is z₁
Conceptual Overview
(Figure: prediction ŷ⁻(t₂) next to the sensor reading, measurement z(t₂).)
So we have the prediction ŷ⁻(t₂)
GPS measurement at t₂: mean = z₂ and variance = σ²_z2
Need to correct the prediction due to the measurement to get ŷ(t₂)
Closer to the more trusted measurement: linear interpolation?
Conceptual Overview
(Figure: prediction ŷ⁻(t₂), measurement z(t₂), and corrected optimal estimate ŷ(t₂).)
The corrected mean is the new optimal estimate of position
The new variance is smaller than either of the previous two variances
Conceptual Overview
Lessons so far (not moving):
Make a prediction based on previous data: ŷ⁻, σ⁻
Take a measurement: z_k, σ_z
Optimal estimate: ŷ = prediction + (Kalman gain) * (measurement - prediction)
Variance of estimate = variance of prediction * (1 - Kalman gain)
Conceptual Overview
(Figure: ŷ(t₂) and the naïve prediction ŷ⁻(t₃). Motion model: we move with constant velocity, at least between sensor readings.)
At time t₃, the boat moves with velocity dy/dt = u
Naïve approach: shift the probability to the right to predict
This would work if we knew the velocity exactly (perfect model)
Conceptual Overview
(Figure: ŷ(t₂), the naïve prediction, and the spread-out prediction ŷ⁻(t₃).)
Better to assume an imperfect model by adding Gaussian noise: dy/dt = u + w
The distribution for the prediction moves and spreads out
Conceptual Overview
(Figure: prediction ŷ⁻(t₃), measurement z(t₃), and corrected optimal estimate ŷ(t₃).)
Now we take a measurement at t₃ and need to once again correct the prediction. Same as before.
Conceptual Overview
Lessons learnt from the conceptual overview:
Initial conditions: ŷ_{k-1} and σ_{k-1}
Prediction (ŷ⁻_k, σ⁻_k): use the initial conditions and a motion model (e.g. constant velocity) to make a prediction
Measurement (z_k): take a measurement
Correction (ŷ_k, σ_k): use the measurement to correct the prediction by blending prediction and residual; it is always a case of merging only two Gaussians, giving an optimal estimate with smaller variance
Introduction to the Kalman filter
What is the best estimate if I have two measurements?
Two measurements: q̂₁ = q₁ with variance σ₁², and q̂₂ = q₂ with variance σ₂²
Weighted least squares: S(q*) = Σ_{i=1..n} w_i (q* - q_i)²
Finding the minimum error: δS/δq* = 2 Σ_{i=1..n} w_i (q* - q_i) = 0
(Somewhat complex) algebraic manipulations, with w_i = 1/σ_i², show
q* = q₁ + σ₁² (q₂ - q₁) / (σ₁² + σ₂²)
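A minimal numeric check of this closed form (plain Python; both function names are mine): the fused value must coincide with the inverse-variance weighted mean of the two measurements.

```python
def fuse(q1, var1, q2, var2):
    """Closed form from the slide: q* = q1 + var1*(q2 - q1)/(var1 + var2)."""
    return q1 + var1 * (q2 - q1) / (var1 + var2)

def weighted_mean(q1, var1, q2, var2):
    """Weighted least-squares solution with weights w_i = 1/var_i."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * q1 + w2 * q2) / (w1 + w2)
```

Note the fused estimate leans toward the measurement with the smaller variance, which is exactly the blending behaviour the Kalman gain generalizes.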
In Kalman filter notation
Static re-evaluation of a sensor value (we are not moving!)
x*_{k+1} = x*_k + K_{k+1} (z_{k+1} - x*_k)
K_{k+1} = σ²_k / (σ²_k + σ²_z)
with σ²_k = σ₁² and σ²_z = σ₂²
σ²_{k+1} = σ²_k - K_{k+1} σ²_k = (1 - K_{k+1}) σ²_k
Kalman filter: dynamic prediction
u = velocity, w = noise: dx/dt = u + w
Motion (prediction step), writing x*'_{k+1}, σ'²_{k+1} for the predicted mean and variance:
x*'_{k+1} = x*_k + u (t_{k+1} - t_k)
σ'²_{k+1} = σ²_k + σ²_w (t_{k+1} - t_k)
Combining fusion and dynamic prediction:
x*_{k+1} = x*'_{k+1} + K_{k+1} (z_{k+1} - x*'_{k+1})
         = [x*_k + u (t_{k+1} - t_k)] + K_{k+1} (z_{k+1} - x*_k - u (t_{k+1} - t_k))
K_{k+1} = σ'²_{k+1} / (σ'²_{k+1} + σ²_z)
        = [σ²_k + σ²_w (t_{k+1} - t_k)] / [σ²_k + σ²_w (t_{k+1} - t_k) + σ²_z]
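A one-dimensional sketch of these two steps (plain Python; the function names are mine): first shift and inflate the belief with the motion model, then blend in the measurement with the Kalman gain.

```python
def predict(x, var, u, dt, var_w):
    """Motion step for dx/dt = u + w: the mean shifts by u*dt and the
    variance grows by var_w*dt (imperfect velocity model)."""
    return x + u * dt, var + var_w * dt

def correct(x, var, z, var_z):
    """Fusion step: K = var/(var + var_z), blending prediction and measurement.
    The corrected variance (1 - K)*var is smaller than the predicted one."""
    k = var / (var + var_z)
    return x + k * (z - x), (1.0 - k) * var
```

One full cycle is simply `correct(*predict(...))`: uncertainty grows during the action and shrinks during the perception, exactly as on the action/perception slide.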
Derived by Bayes' Theorem
Sketch of the 1D case in the textbook: the new belief is the product of two beliefs, the predicted belief and the sensed belief. Both are represented as Gaussians, and their product is a Gaussian, with the update being the Kalman rule.
The five steps of Map-Based Localization
1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)
Extended Kalman Filter
We know the motion model is not linear, but we can take the first-order Taylor approximation (once more). It turns out the equations are almost the same.
The (Extended) Kalman filter
In the first step, the robot position x*_t at time step t is predicted based on its old location (time step t-1) and its movement due to the control input u_t:
x*_t = f(x_{t-1}, u_t)
P*_t = F_x P_{t-1} F_x^T + F_u Q_t F_u^T
where F_x and F_u are the Jacobians of f with respect to the state and the control, f is the odometry function, P_{t-1} is the covariance of the previous state, and Q_t is the covariance of the noise associated with the motion.
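The covariance line can be sketched with numpy (assuming the Jacobians F_x, F_u have already been computed for the current linearization point):

```python
import numpy as np

def ekf_predict_covariance(P_prev, F_x, F_u, Q):
    """Propagate pose uncertainty through the linearized odometry function:
    P* = F_x P F_x^T + F_u Q F_u^T."""
    return F_x @ P_prev @ F_x.T + F_u @ Q @ F_u.T
```

The first term carries the old pose uncertainty forward; the second injects fresh motion noise, so P* can only shrink again at the perception update.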
Odometry in the robot's frame of reference
b is the separation between the wheels
Kinematics: (figure)
We still have to make some assumptions
For the position, Σ_p initially can be all zeros
Estimate an (initial) covariance matrix:
Q_t = Σ_Δ = [ k_r |Δs_r|      0
              0               k_l |Δs_l| ]
The two errors from the wheels are independent
The variance of the errors in the wheels is proportional to the absolute value of the travel distance
The constants k_l, k_r are determined experimentally
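These assumptions translate directly into code (plain Python sketch; the function name is mine): a diagonal matrix because the wheel errors are independent, with each variance scaling with the distance travelled by that wheel.

```python
def odometry_covariance(delta_s_r, delta_s_l, k_r, k_l):
    """Motion-noise covariance Q_t = diag(k_r |ds_r|, k_l |ds_l|):
    independent wheel errors, variance proportional to |travel distance|,
    with k_r, k_l determined experimentally."""
    return [[k_r * abs(delta_s_r), 0.0],
            [0.0, k_l * abs(delta_s_l)]]
```

A stationary robot (Δs_r = Δs_l = 0) therefore accumulates no motion uncertainty, which matches the intuition behind the model.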
Odometry in the robot's frame of reference
b is the separation between the wheels
(Motion) error model: (figure)
The Generic Problem (several dimensions / state vector)
(The same block diagram as before: external controls, system with error sources, measuring devices with measurement error sources, and an estimator producing the optimal estimate of the system state.)
The system state cannot be measured directly; we need to estimate it optimally from the measurements.
Theoretical Basis
Process to be estimated:
y_k = A y_{k-1} + B u_k + w_{k-1}
z_k = H y_k + v_k
Process noise w with covariance Q; measurement noise v with covariance R.
Kalman filter
Predicted: ŷ⁻_k is the estimate based on measurements at previous time steps
ŷ⁻_k = A y_{k-1} + B u_k   (motion model)
P⁻_k = A P_{k-1} A^T + Q
Corrected: ŷ_k has the additional information of the measurement at time k
K = P⁻_k H^T (H P⁻_k H^T + R)^{-1}
ŷ_k = ŷ⁻_k + K (z_k - H ŷ⁻_k)
P_k = (I - K H) P⁻_k
Blending Factor
If we are sure about the measurements: the measurement error covariance R decreases to zero, so K increases and weights the residual more heavily than the prediction.
If we are sure about the prediction: the prediction error covariance P⁻_k decreases to zero, so K decreases and weights the prediction more heavily than the residual.
Theoretical Basis
Prediction (Time Update):
(1) Project the state ahead: ŷ⁻_k = A y_{k-1} + B u_k
(2) Project the error covariance ahead: P⁻_k = A P_{k-1} A^T + Q
Correction (Measurement Update):
(1) Compute the Kalman gain: K = P⁻_k H^T (H P⁻_k H^T + R)^{-1}
(2) Update the estimate with measurement z_k: ŷ_k = ŷ⁻_k + K (z_k - H ŷ⁻_k)
(3) Update the error covariance: P_k = (I - K H) P⁻_k
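The two phases above, written with numpy (a sketch, not the lecture's code; variable names follow the slide):

```python
import numpy as np

def kf_predict(y, P, A, B, u, Q):
    """Time update: project the state and error covariance ahead."""
    y_pred = A @ y + B @ u
    P_pred = A @ P @ A.T + Q
    return y_pred, P_pred

def kf_correct(y_pred, P_pred, z, H, R):
    """Measurement update: compute the Kalman gain, then correct
    the estimate and shrink the covariance."""
    S = H @ P_pred @ H.T + R            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    y = y_pred + K @ (z - H @ y_pred)
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return y, P
```

In the scalar case (all matrices 1x1) this reduces exactly to the 1D gain K = P⁻/(P⁻ + R) used in the conceptual overview.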
Quick Example: Constant Model
(The same block diagram, now with a constant system state.)
Quick Example: Constant Model
Prediction:
ŷ⁻_k = ŷ_{k-1}
P⁻_k = P_{k-1}
Correction (here H = 1):
K = P⁻_k (P⁻_k + R)^{-1}
ŷ_k = ŷ⁻_k + K (z_k - ŷ⁻_k)
P_k = (I - K) P⁻_k
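Since the prediction step is trivial for a constant state, the whole filter collapses into one loop (plain Python sketch; the function name is mine):

```python
def constant_model_filter(measurements, y0, p0, r):
    """Scalar filter for a constant state: the prediction is just the
    previous estimate, so each step only blends in the new measurement.
    Returns the (estimate, covariance) history."""
    y, p = y0, p0
    history = []
    for z in measurements:
        k = p / (p + r)       # Kalman gain
        y = y + k * (z - y)   # corrected estimate
        p = (1.0 - k) * p     # error covariance shrinks every step
        history.append((y, p))
    return history
```

Running it on repeated noisy readings shows the error covariance P_k converging toward zero, which is the behaviour plotted on the following slides.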
Quick Example: Constant Model
(Figure: filter output over 100 time steps.)
Quick Example: Constant Model
(Figure: convergence of the error covariance P_k over 100 time steps.)
Quick Example: Constant Model
A larger value of R, the measurement error covariance, indicates poorer-quality measurements. The filter is then slower to believe the measurements: slower convergence.
(Figure.)
Kalman filter illustration
Recall the motion model for a differential two-wheel robot (figure)
Prediction update
P*_t = F_x P_{t-1} F_x^T + F_u Q_t F_u^T
(figure)
Observation
The second step is to obtain the observation Z (measurement) from the robot's sensors at the new location.
The observation usually consists of a set of n_0 single observations z_j extracted from the different sensor signals. It represents features like lines, doors, or any kind of landmark.
The parameters of the targets are usually observed in the sensor/robot frame. Therefore, the observations have to be transformed to the world frame {W}, or the measurement prediction has to be transformed to the sensor frame {R}. This transformation is specified in the function h_j (see later).
Observation
(Figures: raw data of the laser scanner, extracted lines, extracted lines in model space.)
Measurement Prediction
In the next step, we use the predicted robot position x*_t and the features m_j in the map M to generate multiple predicted observations z*_tj.
They have to be transformed into the sensor frame: z*_tj = h_j(x*_t, m_j)
We can now define the measurement prediction as a SET Z* containing all n_j predicted observations: Z* = { z*_tj | 1 ≤ j ≤ n_j }
The function h_j is mainly the coordinate transformation between the world frame {W} and the sensor/robot frame {R}.
Measurement Prediction
For the prediction, only the walls that are in the field of view of the robot are selected. This is done by linking the individual lines to the nodes of the path.
An example
The generated measurement predictions have to be transformed to the robot frame {R}:
m_j = {W}[α_j, r_j]  →  z*_tj = [α*_tj, r*_tj]
According to the figure in the previous slide, the transformation is given by
z*_tj = [α*_tj, r*_tj] = h_j(x*_t, m_j)
      = [ {W}α_j - θ*_t ,  {W}r_j - (x*_t cos({W}α_j) + y*_t sin({W}α_j)) ]
and its Jacobian by
H_j = [ ∂α_tj/∂x*  ∂α_tj/∂y*  ∂α_tj/∂θ* ;  ∂r_tj/∂x*  ∂r_tj/∂y*  ∂r_tj/∂θ* ]
    = [ 0  0  -1 ;  -cos({W}α_j)  -sin({W}α_j)  0 ]
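A plain-Python sketch of this transformation and its Jacobian (function names are mine; the signs follow the standard line-feature derivation, with the map line given as (α, r) in {W}):

```python
import math

def measurement_prediction(x, y, theta, alpha_w, r_w):
    """Transform a map line (alpha_w, r_w) in {W} into the robot frame {R}:
    alpha = alpha_w - theta, r = r_w - (x cos(alpha_w) + y sin(alpha_w))."""
    alpha = alpha_w - theta
    r = r_w - (x * math.cos(alpha_w) + y * math.sin(alpha_w))
    return alpha, r

def jacobian_h(alpha_w):
    """d h_j / d(x, y, theta) for the line feature above (2x3 matrix)."""
    return [[0.0, 0.0, -1.0],
            [-math.cos(alpha_w), -math.sin(alpha_w), 0.0]]
```

As a sanity check, a robot at the world origin with zero heading should predict exactly the map parameters.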
Matching
Assignment from observations z_i (gained by the sensors) to the targets m_j (stored in the map).
For each measurement prediction for which a corresponding observation is found, we calculate the innovation v_tij and its innovation covariance, found by applying the error propagation law:
Σ_IN,tij = H_j P*_t H_j^T + R_ti   (H_j: Jacobian)
The validity of the correspondence between measurement and prediction can be evaluated through the Mahalanobis distance:
v_tij^T (Σ_IN,tij)^{-1} v_tij ≤ g²
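The validation gate is a one-liner with numpy (a sketch; the function name and gate parameter g are as on the slide, everything else is mine):

```python
import numpy as np

def passes_gate(v, sigma_in, g):
    """Accept a candidate (observation, prediction) pair when the innovation v
    passes the Mahalanobis gate: v^T Sigma_IN^-1 v <= g^2."""
    d2 = float(v @ np.linalg.inv(sigma_in) @ v)  # squared Mahalanobis distance
    return d2 <= g * g
```

Pairs that fail the gate are simply discarded from the matching, so outlier observations never enter the estimation step.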
Matching (figure)
Estimation
Applying the Kalman filter
Kalman filter gain: K_t = P*_t H^T (Σ_IN,t)^{-1}
Update of the robot's position estimate: x_t = x*_t + K_t v_t
The associated variance: P_t = P*_t - K_t Σ_IN,t K_t^T
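These three lines, sketched with numpy (function and variable names are mine, following the slide's symbols):

```python
import numpy as np

def ekf_estimate(x_pred, P_pred, v, H, sigma_in):
    """Fuse the pose prediction with the (stacked) innovation v:
    K = P* H^T Sigma_IN^-1, x = x* + K v, P = P* - K Sigma_IN K^T."""
    K = P_pred @ H.T @ np.linalg.inv(sigma_in)
    x = x_pred + K @ v
    P = P_pred - K @ sigma_in @ K.T
    return x, P
```

Note the posterior covariance P_t is the predicted one minus a positive term, so the perception step can only reduce uncertainty, mirroring the 1D picture from the conceptual overview.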
Estimation
Kalman filter estimation of the new robot position x_t: by fusing the prediction of the robot position (magenta) with the innovation gained by the measurement (green), we get the updated estimate of the robot (red).
Summary
Probabilistic map-based localization is a general method that includes the Kalman filter and the Markov-based filter; the particle filter is a special case of the Markov filter.
The main difference is the representation of the belief. As a result, the action update and the sensor update may be different.