Robot Localization and Kalman Filters

Rudy Negenborn (rudy@negenborn.net), August 26, 2003

Outline: Robot Localization, Probabilistic Localization, Kalman Filters, Kalman Localization, Kalman Localization with Landmarks.

Robot Localization. Localization is a key problem. Available location information: Relative measurements (driving: wheel encoders, accelerometers, gyroscopes) arrive frequently, but their error grows over time. Absolute measurements (sensing: GPS, vision, laser, landmarks) arrive less frequently, but their error is bounded.

Probabilistic Localization. Probabilistic approach: consider the whole space of locations. Belief: Bel(x_k) = P(x_k | d_1, ..., d_k). The goal is to get the belief as close to the real distribution as possible. Prior belief: Bel⁻(x_k) = P(x_k | z_1, a_1, ..., z_{k-1}, a_{k-1}). Posterior belief: Bel⁺(x_k) = P(x_k | z_1, a_1, ..., z_{k-1}, a_{k-1}, z_k).

Probabilistic Localization. Localization equations (both use the Markov assumption):
Bel⁻(x_k) = P(x_k | z_1, a_1, ..., z_{k-1}, a_{k-1}) = ∫ P(x_k | a_{k-1}, x_{k-1}) Bel⁺(x_{k-1}) dx_{k-1}
Bel⁺(x_k) = P(x_k | z_1, a_1, ..., z_{k-1}, a_{k-1}, z_k) = P(z_k | x_k) Bel⁻(x_k) / P(z_k | z_1, a_1, ..., z_{k-1}, a_{k-1})
Implementation issues: the motion model P(x_k | a_{k-1}, x_{k-1}), the measurement model P(z_k | x_k), and the representation of the belief.
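These two equations are the generic Bayes filter recursion: a prediction step that pushes the previous posterior through the motion model, and a correction step that reweights it by the measurement likelihood. As a hedged illustration, the sketch below implements one cycle of that recursion over a discretized one-dimensional state space; the grid, motion_model, and measurement_model arguments are assumptions for the example, not part of the original slides.

```python
import numpy as np

def bayes_filter_step(belief, action, measurement, states,
                      motion_model, measurement_model):
    """One predict/correct cycle of a discrete Bayes filter.

    belief[i]               -- Bel+(x_{k-1}) for grid cell states[i]
    motion_model(xn, a, x)  -- P(x_k = xn | a_{k-1} = a, x_{k-1} = x)
    measurement_model(z, x) -- P(z_k = z | x_k = x)
    """
    # Prediction: Bel-(x_k) = sum over x_{k-1} of P(x_k | a_{k-1}, x_{k-1}) Bel+(x_{k-1})
    prior = np.array([
        sum(motion_model(xn, action, x) * b for x, b in zip(states, belief))
        for xn in states
    ])
    # Correction: Bel+(x_k) is proportional to P(z_k | x_k) Bel-(x_k)
    posterior = np.array([measurement_model(measurement, x) for x in states]) * prior
    # The normalizer plays the role of P(z_k | z_1, a_1, ..., z_{k-1}, a_{k-1})
    return posterior / posterior.sum()
```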

Kalman Filters. Representation of the belief: a Gaussian function, described by its mean and (co)variance. Initial belief: Bel(x_0) = N(x̂_0, P_0). Motion model: x_k = A x_{k-1} + B a_{k-1} + w_{k-1}, where w_k ~ N(0, Q_k), so P(x_k | a_{k-1}, x_{k-1}) = N(A x_{k-1} + B a_{k-1}, Q_k). Measurement model: z_k = H x_k + v_k, where v_k ~ N(0, R_k), so P(z_k | x_k) = N(H x_k, R_k).
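To make the linear-Gaussian model concrete, here is a small numpy setup for a hypothetical planar robot whose state holds position and velocity and whose actions are commanded accelerations. All matrices and noise levels below are illustrative assumptions, not values from the slides.

```python
import numpy as np

dt = 1.0  # assumed time step between filter updates

# State x = [px, py, vx, vy]; action a = commanded acceleration [ax, ay]
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)      # motion matrix
B = np.array([[0.5 * dt**2, 0],
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]], dtype=float)            # control (relative measurement) matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)       # absolute sensor measures position only

Q = 0.01 * np.eye(4)       # motion noise covariance Q
R = 1.0 * np.eye(2)        # measurement noise covariance R

x_hat = np.zeros(4)        # mean of the initial belief Bel(x_0)
P = 10.0 * np.eye(4)       # initial covariance P_0 (large: uncertain start)
```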

Kalman Filters. Prior belief: Bel⁻(x_k) = N(x̂⁻_k, P⁻_k), with prior location estimate x̂⁻_k and prior uncertainty P⁻_k. Posterior belief: Bel⁺(x_k) = N(x̂⁺_k, P⁺_k), with posterior location estimate x̂⁺_k and posterior uncertainty P⁺_k.

Kalman Filters. Prediction of the prior belief Bel⁻(x_k) = N(x̂⁻_k, P⁻_k):
x̂⁻_k = A x̂⁺_{k-1} + B â_{k-1}, i.e. the prior location estimate follows from the posterior location estimate and the last relative measurement.
P⁻_k = A P⁺_{k-1} Aᵀ + B U_{k-1} Bᵀ + Q_{k-1}, i.e. the prior uncertainty follows from the posterior uncertainty P⁺_{k-1}, the relative measurement uncertainty U_{k-1}, and the motion uncertainty Q_{k-1}.
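A hedged numpy sketch of this prediction step, continuing the hypothetical model above; U is the covariance of the relative measurement (commanded action) noise, as in the P⁻_k equation.

```python
import numpy as np

def kalman_predict(x_hat_post, P_post, a_prev, A, B, Q, U):
    """Prediction: propagate the posterior belief through the motion model."""
    x_hat_prior = A @ x_hat_post + B @ a_prev        # x̂⁻_k = A x̂⁺_{k-1} + B â_{k-1}
    P_prior = A @ P_post @ A.T + B @ U @ B.T + Q     # P⁻_k = A P⁺ Aᵀ + B U Bᵀ + Q
    return x_hat_prior, P_prior
```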

Kalman Filters. Correction to the posterior belief Bel⁺(x_k) = N(x̂⁺_k, P⁺_k):
x̂⁺_k = x̂⁻_k + K_k (z_k − H x̂⁻_k), i.e. the posterior state estimate is the prior state estimate plus the Kalman gain times the residual between the true measurement z_k and the measurement prediction H x̂⁻_k.
K_k = P⁻_k Hᵀ (H P⁻_k Hᵀ + R_k)⁻¹, where H P⁻_k Hᵀ + R_k is the measurement residual uncertainty.
P⁺_k = (I − K_k H) P⁻_k.
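The matching correction step as a hedged numpy sketch of the equations above:

```python
import numpy as np

def kalman_correct(x_hat_prior, P_prior, z, H, R):
    """Correction: fold the absolute measurement z_k into the prior belief."""
    S = H @ P_prior @ H.T + R                              # measurement residual uncertainty
    K = P_prior @ H.T @ np.linalg.inv(S)                   # Kalman gain K_k
    residual = z - H @ x_hat_prior                         # z_k - H x̂⁻_k
    x_hat_post = x_hat_prior + K @ residual                # x̂⁺_k
    P_post = (np.eye(len(x_hat_prior)) - K @ H) @ P_prior  # P⁺_k = (I - K_k H) P⁻_k
    return x_hat_post, P_post
```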

Kalman Gain (figure): surface plot of the gain K as a function of the measurement noise and the prior state uncertainty.
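In the scalar case the shape of that surface is easy to reproduce: K = P⁻ / (P⁻ + R), so the gain approaches 1 when the measurement noise R is small relative to the prior uncertainty P⁻ (trust the measurement) and approaches 0 in the opposite case (trust the prediction). A tiny illustrative sketch:

```python
import numpy as np

P_prior = np.linspace(0.0, 25.0, 6)   # prior state uncertainty (variance)
R = np.linspace(0.1, 25.0, 6)         # measurement noise (variance)

for p in P_prior:
    gains = p / (p + R)               # scalar Kalman gain K = P⁻ / (P⁻ + R)
    print(f"P⁻ = {p:5.1f}  K = {np.round(gains, 2)}")
```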

Extended Kalman Filter. Handles nonlinear motion and measurement models by linearization around the estimated trajectory: the partial derivatives of the nonlinear models supply A, B, and H. This works well when the models are close to linear over the uncertainty region. Drawbacks: the linearization must be evaluated at every time step, and it introduces linearization errors.
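A minimal sketch of what linearizing around the estimated trajectory looks like in code; the motion model f, measurement model h, and their Jacobians F_jac and H_jac are assumed to be supplied by the user and are not taken from the slides.

```python
import numpy as np

def ekf_step(x_hat_post, P_post, a_prev, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter cycle with user-supplied nonlinear models.

    f(x, a)     -- nonlinear motion model
    h(x)        -- nonlinear measurement model
    F_jac(x, a) -- Jacobian of f w.r.t. x at the current estimate (plays the role of A)
    H_jac(x)    -- Jacobian of h w.r.t. x at the predicted estimate (plays the role of H)
    """
    # Prediction: push the mean through f, the covariance through the Jacobian of f
    F = F_jac(x_hat_post, a_prev)
    x_hat_prior = f(x_hat_post, a_prev)
    P_prior = F @ P_post @ F.T + Q

    # Correction: linearize h around the predicted estimate
    H = H_jac(x_hat_prior)
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_hat_post_new = x_hat_prior + K @ (z - h(x_hat_prior))
    P_post_new = (np.eye(len(x_hat_prior)) - K @ H) @ P_prior
    return x_hat_post_new, P_post_new
```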

Kalman Localization. Localization instances: Position Tracking (initial belief with a peak at the true initial location), Global Localization (initial uniform belief), Kidnapped Robot (initial belief with a peak far from the true location).

Position Tracking (figure): simulated x, y trajectory of the robot, starting from S.

Position Tracking (figure): three state elements plotted against the time step (value vs. step).

Position Tracking (figure): close-up of the x, y trajectory with the first 20 time steps labeled.

Infrequent Measurements (figure): prior and posterior uncertainty ((co)variance vs. step) in state elements 1, 5, and 9.

Kalman Localization with Landmarks. Uniquely identifiable landmarks: 1:1 correspondence. Type-identifiable landmarks: 1:n correspondence. Extension of the Kalman Filter framework: maintain multiple state beliefs, with a probability for each belief.
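One way to read "multiple state beliefs with a probability for each belief" is a bank of Kalman filters, one per candidate landmark correspondence, each reweighted by how well it predicts the incoming measurement. The sketch below is an assumed illustration of that idea, reusing the hypothetical kalman_correct helper from the correction step above; it is not the exact extension described in the slides.

```python
import numpy as np
from scipy.stats import multivariate_normal

def multi_hypothesis_update(hypotheses, z, H, R):
    """Update a list of (weight, mean, covariance) beliefs with one measurement.

    Each hypothesis corresponds to one possible landmark association.
    """
    updated = []
    for w, x_hat, P in hypotheses:
        S = H @ P @ H.T + R                                  # residual uncertainty for this belief
        likelihood = multivariate_normal.pdf(z, mean=H @ x_hat, cov=S)  # P(z_k | hypothesis)
        x_new, P_new = kalman_correct(x_hat, P, z, H, R)     # standard Kalman correction
        updated.append((w * likelihood, x_new, P_new))
    total = sum(w for w, _, _ in updated)
    # Renormalize so the per-belief probabilities again sum to one
    return [(w / total, x, P) for w, x, P in updated]
```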

Type Identifiable Landmarks (figure): true x, y trajectory (in cm) with measurements of landmarks lm=1 to lm=5, starting from S.

Type Identifiable Landmarks (figures): three surface plots over the robot's x, y position (cm).

Type Identifiable Landmarks (figure): close-up of the true x, y trajectory (cm) with measurements.

Summary & Future Summary Describing theory of localization and Kalman Filters Illustrating applications of Kalman Filter to localization problems Extension of Kalman Filter framework to multiple beliefs Future work Practical application to robots Possibilities of Kalman Filter extension Website: http://www.negenborn.net/kal_loc www.negenborn.net/kal_loc/ 30

The end.