Probabilistic Fundamentals in Robotics


Probabilistic Models of Mobile Robots: Robot Localization
Basilio Bona, DAUIN, Politecnico di Torino, June 2011

Course outline:
- Basic mathematical framework
- Probabilistic models of mobile robots
- Mobile robot localization problem
- Robotic mapping
- Probabilistic planning and control

Reference textbook: Thrun, Burgard, Fox, Probabilistic Robotics, MIT Press, 2006, http://www.probabilistic-robotics.org/

Mobile robot localization problem

Mobile robot localization, Markov and Gaussian:
- Introduction
- Markov localization
- EKF localization
- Multi-hypothesis tracking
- UKF localization

Mobile robot localization, Grid and Monte Carlo:
- Grid localization
- Monte Carlo localization
- Localization in dynamic environments

Introduction
Localization, or position estimation, is the problem of determining the pose of a rover relative to a given map of the environment. In other words, localization is the process of establishing the correspondence between the map reference frame and the local reference frame of the rover. Hence a coordinate transformation is necessary, from the robot-centered local coordinate frame to the global reference frame.
The pose $x_t = (x\ y\ \theta)^T$ of the robot is not known and cannot be measured directly with a sensor: it can only be inferred from environment data (and odometry). A single measurement is not sufficient to recover the pose. Localization techniques depend on the type of map used. The algorithms presented here are variants of the basic Bayes filter.

Example: localization with known landmarks (figure).

Localization as a dynamic state system
A graphical state description of the localization problem (figure): the map $m$, the measurements $z_t$, and the controls $u_t$ are known.

Types of maps
- a 2D metric map
- a topological map (graph-like)
- an occupancy grid map
- a mosaic map of a ceiling

A taxonomy of localization problems
Position tracking: the initial robot pose is known and the initial error is assumed small; the pose uncertainty is often assumed to be unimodal; the problem is essentially a local one.
Global localization: the initial robot pose is unknown and the initial error cannot be assumed to be small; global localization includes position tracking.
Kidnapped robot problem: during operation the robot gets kidnapped (disappears from the environment) and suddenly reappears in another location; the robot believes it can estimate its position, but this is not always true.

Static vs dynamic environments
Static environments are defined as environments where the only moving object is the robot; all other objects (features, landmarks, etc.) are fixed with respect to the map.
In dynamic environments other objects move, i.e., change their location over time. Changes are usually not episodic, i.e., they persist over time and are recognized by sensor readings; changes that affect only a single or a few measurements are treated as noise. Examples are people moving around the robot, doors that open and close, movable furniture, and daylight illumination (for robots with vision sensors).
Two approaches: moving objects are included in the state, or sensor data are filtered to eliminate the effect of unmodeled dynamics.

Passive vs active localization
Passive localization: the localization module only observes the robot operating; the localization results do not control the robot motion, which is random or determined by the accomplishment of other tasks.
Active localization: the localization module controls the robot motion and acts to maximize a performance criterion, e.g., on the localization error.

Single robot vs multi-robot
Single robot: localization is performed by a single robot that moves (actively or passively) in the environment. No communication issues are present (except with the supervisor).
Multi-robot: localization is jointly performed by a team of robots. In principle each one can perform single-robot localization, but if the robots can detect each other, their information can be shared and each robot's belief can influence the other robots' beliefs. Detection and communication issues are important, as well as relative pose measurement.

Markov localization
Probabilistic localization algorithms are modifications of the Bayes filter. The simplest localization algorithm (the pure Bayes filter) is called Markov localization.

Prediction: $\overline{bel}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1}$

Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
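A minimal Python sketch of this prediction/correction cycle on a discretized one-dimensional corridor (not from the original slides; the door positions, noise values, and helper names below are illustrative assumptions):

```python
import numpy as np

# Markov localization over a 1D grid: bel[i] = P(robot in cell i).
N_CELLS = 20
DOORS = {3, 7, 14}            # hypothetical landmark (door) positions

def predict(bel, u):
    """Prediction: convolve the belief with a simple motion model.
    The robot intends to move u cells; with prob. 0.8 it succeeds,
    with prob. 0.1 it under/overshoots by one cell (circular corridor)."""
    new_bel = np.zeros_like(bel)
    for shift, p in ((u - 1, 0.1), (u, 0.8), (u + 1, 0.1)):
        new_bel += p * np.roll(bel, shift)
    return new_bel

def correct(bel, z_door):
    """Correction: weight each cell by the measurement likelihood
    p(z | x), then normalize (the eta factor)."""
    likelihood = np.array([0.9 if (i in DOORS) == z_door else 0.1
                           for i in range(N_CELLS)])
    bel = likelihood * bel
    return bel / bel.sum()

bel = np.full(N_CELLS, 1.0 / N_CELLS)   # uniform initial belief (global localization)
bel = correct(bel, z_door=True)          # the robot sees a door
bel = predict(bel, u=2)                  # the robot moves two cells
bel = correct(bel, z_door=True)          # it sees a door again
print(np.argmax(bel), bel.max())
```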

Initial belief

Position tracking: the initial belief is a normal distribution $N(x; \mu, \Sigma)$ centered at the known initial pose:

$bel(x_0) = \det(2\pi\Sigma)^{-1/2} \exp\left\{-\tfrac{1}{2}(x_0 - \hat{x}_0)^T \Sigma^{-1} (x_0 - \hat{x}_0)\right\} = N(x_0; \hat{x}_0, \Sigma)$

Global localization: the initial belief is uniform over the pose space:

$bel(x_0) = \dfrac{1}{|X|}$, where $|X|$ is the volume of the pose space.

KF summary
Motion model: $x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$, i.e., $p(x_t \mid x_{t-1}, u_t) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$

Prediction: $\overline{bel}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1}$, with $p(x_t \mid x_{t-1}, u_t) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and $bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$

KF summary (prediction)
Substituting the two Gaussians into the prediction integral,

$\overline{bel}(x_t) = \eta \int \exp\left\{-\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t)\right\} \exp\left\{-\tfrac{1}{2}(x_{t-1} - \mu_{t-1})^T \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1})\right\} dx_{t-1}$

which yields

$\overline{bel}(x_t):\quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t, \qquad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$

KF summary (measurement model)
$z_t = C_t x_t + \delta_t$, i.e., $p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, Q_t)$

Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$, with $p(z_t \mid x_t) \sim N(z_t;\, C_t x_t,\, Q_t)$ and $\overline{bel}(x_t) \sim N(x_t;\, \bar{\mu}_t, \bar{\Sigma}_t)$

KF summary (correction)
$bel(x_t) = \eta \exp\left\{-\tfrac{1}{2}(z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t)\right\} \exp\left\{-\tfrac{1}{2}(x_t - \bar{\mu}_t)^T \bar{\Sigma}_t^{-1} (x_t - \bar{\mu}_t)\right\}$

which yields

$bel(x_t):\quad \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), \qquad \Sigma_t = (I - K_t C_t)\bar{\Sigma}_t$

with Kalman gain $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$

EKF localization
We assume that the map is represented by a collection of features (landmark-based localization), and we assume the velocity motion model and the measurement model presented in the previous slides. All features are uniquely identifiable, i.e., the correspondence variables in $p(z_t \mid x_t, c_t, m)$ are known landmarks. Range and heading measurements are available.

Figure: a one-dimensional environment with three landmarks (Door 1, Door 2, Door 3); correspondence is known.
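The closed-form update above translates directly into code. Below is a minimal sketch of one predict/correct cycle for a generic linear-Gaussian system; the matrices A, B, C and the noise covariances R, Q of the 1D constant-velocity example are hypothetical placeholders, not from the slides:

```python
import numpy as np

def kf_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter cycle: prediction followed by correction."""
    # Prediction: mu_bar = A mu + B u,  Sigma_bar = A Sigma A^T + R
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction: K = Sigma_bar C^T (C Sigma_bar C^T + Q)^{-1}
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu = mu_bar + K @ (z - C @ mu_bar)
    Sigma = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu, Sigma

# Hypothetical 1D example: state = (position, velocity), input = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])               # only the position is measured
R = 0.01 * np.eye(2)                     # motion noise covariance
Q = np.array([[0.25]])                   # measurement noise covariance

mu, Sigma = np.zeros(2), np.eye(2)
mu, Sigma = kf_step(mu, Sigma, u=np.array([1.0]), z=np.array([0.05]),
                    A=A, B=B, C=C, R=R, Q=Q)
print(mu, np.diag(Sigma))
```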

Figure: the one-dimensional door example — initial (uniform) belief, then alternating measurement model / correction-update steps and motion model (prediction) steps.

EKF_known_correspondence_localization$(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, c_t, m)$ — PREDICTION:

1: $G_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial(x, y, \theta)} = \begin{pmatrix} \partial x'/\partial \mu_{t-1,x} & \partial x'/\partial \mu_{t-1,y} & \partial x'/\partial \mu_{t-1,\theta} \\ \partial y'/\partial \mu_{t-1,x} & \partial y'/\partial \mu_{t-1,y} & \partial y'/\partial \mu_{t-1,\theta} \\ \partial \theta'/\partial \mu_{t-1,x} & \partial \theta'/\partial \mu_{t-1,y} & \partial \theta'/\partial \mu_{t-1,\theta} \end{pmatrix}$  (Jacobian of $g$ w.r.t. the location)

2: $V_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial(v_t, \omega_t)} = \begin{pmatrix} \partial x'/\partial v_t & \partial x'/\partial \omega_t \\ \partial y'/\partial v_t & \partial y'/\partial \omega_t \\ \partial \theta'/\partial v_t & \partial \theta'/\partial \omega_t \end{pmatrix}$  (Jacobian of $g$ w.r.t. the control)

3: $M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix}$  (motion noise)

4: $\bar{\mu}_t = g(u_t, \mu_{t-1})$  (predicted mean)

5: $\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + V_t M_t V_t^T$  (predicted covariance)

EKF_known_correspondence_localization$(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, c_t, m)$ — CORRECTION:

1: $\hat{z}_t = \begin{pmatrix} \sqrt{(m_x - \bar{\mu}_{t,x})^2 + (m_y - \bar{\mu}_{t,y})^2} \\ \operatorname{atan2}(m_y - \bar{\mu}_{t,y},\ m_x - \bar{\mu}_{t,x}) - \bar{\mu}_{t,\theta} \end{pmatrix}$  (predicted measurement mean)

2: $H_t = \dfrac{\partial h(\bar{\mu}_t, m)}{\partial(x, y, \theta)} = \begin{pmatrix} \partial r_t/\partial \bar{\mu}_{t,x} & \partial r_t/\partial \bar{\mu}_{t,y} & \partial r_t/\partial \bar{\mu}_{t,\theta} \\ \partial \phi_t/\partial \bar{\mu}_{t,x} & \partial \phi_t/\partial \bar{\mu}_{t,y} & \partial \phi_t/\partial \bar{\mu}_{t,\theta} \end{pmatrix}$  (Jacobian of $h$ w.r.t. the location)

3: $Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}$  (measurement noise)

4: $S_t = H_t \bar{\Sigma}_t H_t^T + Q_t$  (predicted measurement covariance)

5: $K_t = \bar{\Sigma}_t H_t^T S_t^{-1}$  (Kalman gain)

6: $\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)$  (updated mean)

7: $\Sigma_t = (I - K_t H_t) \bar{\Sigma}_t$  (updated covariance)

EKF prediction step with different motion noise parameters (figures): small rotation and translation noise; high rotation noise; high translation noise; high rotation and translation noise.
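A minimal sketch of the correction step above for a single known landmark with a range-bearing measurement; the numbers and the landmark position are hypothetical:

```python
import numpy as np

def ekf_correct(mu_bar, Sigma_bar, z, landmark, sigma_r, sigma_phi):
    """EKF correction for one range-bearing measurement of a known landmark.
    mu_bar = (x, y, theta) predicted mean, z = (range, bearing)."""
    dx = landmark[0] - mu_bar[0]
    dy = landmark[1] - mu_bar[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu_bar[2]])  # predicted measurement
    # Jacobian of h with respect to the robot pose
    H = np.array([[-dx / r, -dy / r, 0.0],
                  [ dy / q, -dx / q, -1.0]])
    Q = np.diag([sigma_r**2, sigma_phi**2])
    S = H @ Sigma_bar @ H.T + Q                   # innovation covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)        # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # normalize the bearing
    mu = mu_bar + K @ innov
    Sigma = (np.eye(3) - K @ H) @ Sigma_bar
    return mu, Sigma

mu, Sigma = ekf_correct(np.array([0.0, 0.0, 0.1]), 0.1 * np.eye(3),
                        z=np.array([5.1, 0.75]), landmark=(3.0, 4.0),
                        sigma_r=0.2, sigma_phi=0.05)
print(mu)
```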

EKF measurement prediction step (figure): the innovation is $z_t - \hat{z}_t$.

EKF correction step (figure): measurement uncertainty, uncertainty in robot location, prediction uncertainty.

Estimation sequence (1), accurate detection sensor (figure): trajectories estimated from the motion control (with uncertainty), ground-truth trajectories, and the corrected trajectories.

Estimation sequence (2), less accurate detection sensor (figure).

Estimation with unknown correspondences
When the landmark correspondence is not certain (as is often the case), one should adopt a strategy to determine the identity of the landmarks during localization. The simplest and most popular technique is maximum likelihood correspondence: one determines the most likely value of the correspondence variable $c_t$ and uses it as the true one. This strategy is fragile when different landmarks have equally likely values. To reduce the risk: choose landmarks that are far apart, so they are not confused, and make sure that the robot pose uncertainty remains small.

ML data association
The maximum likelihood estimator (MLE) determines the correspondence $c_t$ that maximizes the likelihood:

$\hat{c}_t = \arg\max_{c_t} p(z_t \mid c_{1:t}, z_{1:t-1}, u_{1:t}, m)$

This expression is conditioned on all prior correspondences $c_{1:t-1}$, which are treated as always correct. When the number of landmarks is high, the number of possible correspondences grows too much and the problem becomes intractable. A solution is to maximize each term singly:

$\hat{c}_t^i = \arg\max_{c_t^i} p(z_t^i \mid c_{1:t}, z_{1:t-1}, u_{1:t}, m) \approx \arg\max_{c_t^i} N\!\left(z_t^i;\ h(\bar{\mu}_t, c_t^i, m),\ H_t \bar{\Sigma}_t H_t^T + Q_t\right)$
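As an illustration (a hypothetical helper, not from the slides), the per-measurement rule can be implemented by evaluating the Gaussian innovation log-likelihood against every landmark and keeping the argmax:

```python
import numpy as np

def ml_correspondence(mu_bar, Sigma_bar, z, landmarks, Q):
    """Pick the landmark index maximizing N(z; h(mu_bar, j, m), H S H^T + Q)."""
    best_j, best_ll = None, -np.inf
    for j, (mx, my) in enumerate(landmarks):
        dx, dy = mx - mu_bar[0], my - mu_bar[1]
        q = dx**2 + dy**2
        z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
        H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                      [ dy / q,          -dx / q,         -1.0]])
        S = H @ Sigma_bar @ H.T + Q                 # innovation covariance
        innov = z - z_hat
        innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi
        # Gaussian log-likelihood of the innovation
        ll = -0.5 * (innov @ np.linalg.solve(S, innov)
                     + np.log(np.linalg.det(2 * np.pi * S)))
        if ll > best_ll:
            best_j, best_ll = j, ll
    return best_j

landmarks = [(3.0, 4.0), (-2.0, 6.0), (8.0, -1.0)]   # hypothetical map
j = ml_correspondence(np.array([0.0, 0.0, 0.1]), 0.1 * np.eye(3),
                      z=np.array([5.1, 0.75]), landmarks=landmarks,
                      Q=np.diag([0.04, 0.0025]))
print("most likely landmark:", j)
```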

Multi-hypothesis tracking filter (MHTF)
Each color represents a different hypothesis (figure).

Localization with MHTF
The current belief is represented not by one but by multiple hypotheses; each hypothesis is tracked by a Kalman filter. Additional problems:
- Data association: which observation corresponds to which hypothesis?
- Hypothesis management: when is it necessary to add or delete hypotheses?
There is a large body of literature on target tracking, motion correspondence, etc.

MHTF example
The blue robot sees a door, but cannot resolve its position with certainty, since the other hypotheses (in red) are also likely (figure).

MHT (1)
Hypotheses are extracted from laser range finder (LRF) scans. Each hypothesis consists of a pose estimate, a covariance matrix, and the probability of being the correct one:

$H_i = \{\hat{x}_i, \Sigma_i, P(H_i)\}$

The hypothesis probability is computed using the Bayes rule, given a sensor report $r_t$:

$P(H_i \mid r_t) = \dfrac{P(r_t \mid H_i)\, P(H_i)}{P(r_t)}$

Hypotheses with low probability are deleted. New candidates are extracted from LRF scans:

$C_j = \{z_j, R_j\}$
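A tiny sketch of the Bayes-rule reweighting (hypothetical structure, not from the slides): each hypothesis weight is multiplied by the likelihood of the sensor report, the set is renormalized, and low-probability hypotheses are pruned:

```python
def update_hypotheses(hypotheses, report_likelihood, min_prob=0.01):
    """hypotheses: list of dicts with keys 'mean', 'cov', 'prob'.
    report_likelihood(h): returns p(r_t | H_i) from the measurement model."""
    for h in hypotheses:
        h["prob"] *= report_likelihood(h)          # P(r|H_i) P(H_i)
    total = sum(h["prob"] for h in hypotheses)     # P(r) as the normalizer
    for h in hypotheses:
        h["prob"] /= total
    # prune unlikely hypotheses and renormalize the survivors
    survivors = [h for h in hypotheses if h["prob"] >= min_prob]
    if not survivors:
        return []
    s = sum(h["prob"] for h in survivors)
    for h in survivors:
        h["prob"] /= s
    return survivors
```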

MHT (2), MHT (3): example figures.

MHT: implemented system — example run (figures): map and trajectory; number of hypotheses vs. time; $P(H_{best})$.

UKF localization
The UKF localization algorithm uses the unscented transform to linearize the motion and measurement models. Instead of computing Jacobians of these models, it computes Gaussians using sigma points and transforms them through the models. The landmark association (correspondence) is assumed certain.

UKF algorithm, part a): sigma points (figure). Part b): cross covariance (figure).

UKF_localization$(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m)$ — PREDICTION:

$M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix}$  (motion noise)

$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}$  (measurement noise)

$\mu_{t-1}^a = \begin{pmatrix} \mu_{t-1}^T & (0\ 0) & (0\ 0) \end{pmatrix}^T$  (augmented state mean)

$\Sigma_{t-1}^a = \begin{pmatrix} \Sigma_{t-1} & 0 & 0 \\ 0 & M_t & 0 \\ 0 & 0 & Q_t \end{pmatrix}$  (augmented covariance)

$\chi_{t-1}^a = \begin{pmatrix} \mu_{t-1}^a & \ \mu_{t-1}^a + \gamma\sqrt{\Sigma_{t-1}^a} & \ \mu_{t-1}^a - \gamma\sqrt{\Sigma_{t-1}^a} \end{pmatrix}$  (sigma points)

$\bar{\chi}_t^x = g(u_t + \chi_t^u,\ \chi_{t-1}^x)$  (prediction of sigma points)

$\bar{\mu}_t = \sum_{i=0}^{2L} w_i^m\, \bar{\chi}_{i,t}^x$  (predicted mean)

$\bar{\Sigma}_t = \sum_{i=0}^{2L} w_i^c\, (\bar{\chi}_{i,t}^x - \bar{\mu}_t)(\bar{\chi}_{i,t}^x - \bar{\mu}_t)^T$  (predicted covariance)

UKF_localization$(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m)$ — CORRECTION 1):

$\bar{Z}_t = h(\bar{\chi}_t^x) + \chi_t^z$  (measurement sigma points)

$\hat{z}_t = \sum_{i=0}^{2L} w_i^m\, \bar{Z}_{i,t}$  (predicted measurement mean)

$S_t = \sum_{i=0}^{2L} w_i^c\, (\bar{Z}_{i,t} - \hat{z}_t)(\bar{Z}_{i,t} - \hat{z}_t)^T$  (predicted measurement covariance)

$\Sigma_t^{x,z} = \sum_{i=0}^{2L} w_i^c\, (\bar{\chi}_{i,t}^x - \bar{\mu}_t)(\bar{Z}_{i,t} - \hat{z}_t)^T$  (cross covariance)

$K_t = \Sigma_t^{x,z} S_t^{-1}$  (Kalman gain)

$\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)$  (updated mean)

$\Sigma_t = \bar{\Sigma}_t - K_t S_t K_t^T$  (updated covariance)

CORRECTION 2):

$\hat{z}_t = \begin{pmatrix} \sqrt{(m_x - \bar{\mu}_{t,x})^2 + (m_y - \bar{\mu}_{t,y})^2} \\ \operatorname{atan2}(m_y - \bar{\mu}_{t,y},\ m_x - \bar{\mu}_{t,x}) - \bar{\mu}_{t,\theta} \end{pmatrix}$  (predicted measurement mean)

$H_t = \dfrac{\partial h(\bar{\mu}_t, m)}{\partial(x, y, \theta)} = \begin{pmatrix} \partial r_t/\partial \bar{\mu}_{t,x} & \partial r_t/\partial \bar{\mu}_{t,y} & \partial r_t/\partial \bar{\mu}_{t,\theta} \\ \partial \phi_t/\partial \bar{\mu}_{t,x} & \partial \phi_t/\partial \bar{\mu}_{t,y} & \partial \phi_t/\partial \bar{\mu}_{t,\theta} \end{pmatrix}$  (Jacobian of $h$ w.r.t. the location)

$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}$

$S_t = H_t \bar{\Sigma}_t H_t^T + Q_t$  (predicted measurement covariance)

$K_t = \bar{\Sigma}_t H_t^T S_t^{-1}$  (Kalman gain)

$\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)$  (updated mean)

$\Sigma_t = (I - K_t H_t) \bar{\Sigma}_t$  (updated covariance)

UKF prediction step (figures), starting from the same initial estimate with different motion noise settings: 1) small noise in translation and rotation; 2) high noise in translation, small in rotation; 3) high noise in rotation, small in translation; 4) high noise in both translation and rotation.
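The algorithm above hinges on the unscented transform. Here is a compact sketch of the core sigma-point machinery — generation, propagation through a nonlinear function, and recovery of mean and covariance — using the standard scaled unscented transform; the parameter values and the polar-to-Cartesian example are illustrative assumptions:

```python
import numpy as np

def unscented_transform(mu, Sigma, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate N(mu, Sigma) through a nonlinear function f via sigma points."""
    L = len(mu)
    lam = alpha**2 * (L + kappa) - L
    gamma = np.sqrt(L + lam)
    sqrt_S = np.linalg.cholesky(Sigma)           # matrix square root
    # 2L+1 sigma points: mu, mu + gamma*columns, mu - gamma*columns
    chi = np.vstack([mu,
                     mu + gamma * sqrt_S.T,
                     mu - gamma * sqrt_S.T])
    # standard weights for the mean (w_m) and the covariance (w_c)
    w_m = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (L + lam)
    w_c[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(x) for x in chi])            # transformed sigma points
    mu_y = w_m @ Y
    diff = Y - mu_y
    Sigma_y = (w_c * diff.T) @ diff
    return mu_y, Sigma_y

# Hypothetical example: polar-to-Cartesian conversion of a noisy (r, phi) estimate.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
mu_y, Sigma_y = unscented_transform(np.array([5.0, 0.5]),
                                    np.diag([0.04, 0.01]), f)
print(mu_y, Sigma_y)
```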

UKF observation prediction step (figures)
The left plots show the sigma points predicted from two motion updates, along with the resulting uncertainty ellipses. The true robot and the observation are indicated by the white circle and the bold line, respectively. The right plots show the resulting measurement prediction sigma points. The white arrows indicate the innovations, the differences between observed and predicted measurements.

UKF correction step (figures)
The panels on the left show the measurement prediction; the panels on the right show the resulting corrections, which update the mean estimate and reduce the position uncertainty ellipses.

EKF correction step (figure).

Estimation sequence (figures: EKF, PF, UKF)
Robot trajectory according to the motion control (dashed lines) and the resulting true trajectory (solid lines); landmark detections are indicated by thin lines.

Prediction quality (figures)
EKF prediction vs. UKF prediction: approximation error due to linearization. The robot moves on a circle; the reference covariances are extracted from an accurate, sample-based prediction.

Mobile robot localization: Grid and Monte Carlo methods
These algorithms can process raw sensor measurements, i.e., there is no need to extract features from the measurements. These methods are non-parametric, i.e., they are not limited to unimodal distributions as EKF localization is. They can solve the global localization problem and, in some cases, the kidnapped robot problem; the EKF algorithm is not able to solve such problems. The first method is called grid localization; the second is called Monte Carlo localization.

Grid localization
Grid localization uses a histogram filter to represent the posterior belief. The grid resolution is a key factor for the performance of the method: when the grid cells are small, the algorithm can be extremely slow; when they are large, the resulting information loss can make the algorithm fail in some cases.

Example (figure): alternating measurement model / correction-update steps and motion model (prediction) steps.

Topological grid map
A topological map is a graph annotation of the environment. Topological maps assign nodes to particular places, and edges represent paths where a direct passage between pairs of places (the end nodes) exists. Humans manage spatial knowledge primarily through topological information. This information is used to construct a hierarchical topological map that describes the environment.

Topological grid map representation:
- coarse gridding
- cells of varying size
- resolution influenced by the structure of the environment (significant places and landmarks such as doors, corridors, windows, T-junctions, and dead ends act as grid elements)

Metric grid map
Metric grid representation:
- finer gridding
- cells of uniform size
- resolution not influenced by the structure of the environment
- usually cell sizes from 15 cm x 15 cm up to 1 m x 1 m
- motion model affected by robot velocity and cell size

Cell size influence
The cell size influences the performance of the grid localization algorithm (figures): average localization error as a function of grid cell size, for ultrasound sensors and laser range finders; average CPU time needed for global localization as a function of grid resolution, again for both ultrasound sensors and laser range finders.

Example with metric grids (1)-(3): figure sequence.

Example with sonar data (1)-(5): figure sequence.

Monte Carlo localization
Monte Carlo localization (MCL) is a relatively new, yet very popular, algorithm. It is a versatile method in which the belief is represented by a set of particles (i.e., a particle filter). It can be used both for local and for global localization problems. In the context of localization, the particles are propagated according to the motion model; they are then weighted according to the likelihood of the observations. In a resampling step, new particles are drawn with a probability proportional to the likelihood of the observation. The method first appeared in the 1970s, and was rediscovered by Kitagawa and by Isard & Blake in computer vision.

Particle filter localization (MCL)
Particle-filter-based localization (Monte Carlo Localization, MCL) uses a set of $n$ weighted random samples to approximate the robot pose belief:

$bel(p_t) \approx \sum_{i=1}^{n} \omega_t^{[i]}\, \delta\!\left(p_t - \hat{p}_t^{[i]}\right), \qquad \text{with } \sum_{i=1}^{n} \omega_t^{[i]} = 1$

where $\hat{p}_t^{[i]}$ is the pose of particle $i$ and $\omega_t^{[i]}$ is its weight.

Particle filter localization (MCL)
Particle-based representation of the position belief (figure): particle-based approximation vs. Gaussian approximation (ellipse: 95% acceptance region).

Using the particle approximation, the Bayes filter can be reformulated as follows (starting from the robot initial belief at time zero):
1. PREDICTION: generate a new set of particles given the motion model and the applied controls.
2. UPDATE: assign to each particle an importance weight according to the sensor measurements.
3. RESAMPLING: randomly resample particles as a function of their importance weights.

Prediction
Generate a new set of particles given the motion model and the applied controls. For each particle, given the particle pose at time step t-1 and the commands, the particle pose at time t is predicted using the motion model:

$\hat{p}_t^{[i]} = f\!\left(\hat{p}_{t-1}^{[i]}, u_t, \omega\right)$

The noise variable $\omega$ is randomly drawn according to the noise distribution. We obtain a set of predicted particles.

Update
Assign to each particle an importance weight according to the sensor measurements. For each particle, compare the particle's prediction of the measurements with the actual measurements: $z_t$ vs. $h(\hat{p}_t^{[i]})$. Particles whose predictions match the measurements are given a high weight (shown in red in the figure).
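A minimal sketch of the per-particle prediction step above, using a simple velocity motion model with additive Gaussian noise (the noise parameters and the function name are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_motion_model(pose, u, dt=0.1, noise=(0.05, 0.05)):
    """Sample a new particle pose from p(x_t | x_{t-1}, u_t).
    pose = (x, y, theta); u = (v, omega) commanded velocities."""
    v = u[0] + rng.normal(0.0, noise[0])     # noisy translational velocity
    w = u[1] + rng.normal(0.0, noise[1])     # noisy rotational velocity
    x, y, theta = pose
    return np.array([x + v * dt * np.cos(theta),
                     y + v * dt * np.sin(theta),
                     theta + w * dt])

particles = [sample_motion_model(np.zeros(3), u=(1.0, 0.2)) for _ in range(5)]
print(np.round(particles, 3))
```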

Resampling
Randomly resample particles as a function of their importance weights: n times, draw (with replacement) a particle from $\bar{\Gamma}_t$ with probability given by the importance weights, and put it in the set $\Gamma_t$. Particles whose predictions match the measurements are given a high weight. The new set $\Gamma_t$ provides the particle-based approximation of the robot pose at time t.

Example (figure): alternating measurement model / correction-update steps and motion model (prediction) steps.

Example (figure).

MC_localization$(\chi_{t-1}, u_t, z_t, m)$:

1: $\bar{\chi}_t = \chi_t = \emptyset$
2: for $m = 1$ to $M$ do
3:   $x_t^{[m]} = $ sample_motion_model$(u_t, x_{t-1}^{[m]})$   (see previous slides)
4:   $w_t^{[m]} = $ measurement_model$(z_t, x_t^{[m]}, m)$
5:   $\bar{\chi}_t = \bar{\chi}_t + \langle x_t^{[m]}, w_t^{[m]} \rangle$
6: endfor
7: for $m = 1$ to $M$ do
8:   draw $i$ with probability $\propto w_t^{[i]}$
9:   add $x_t^{[i]}$ to $\chi_t$
10: endfor
11: return $\chi_t$
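Tying the pieces together, here is a minimal runnable sketch of the MCL loop above for a 2D pose with a range-only measurement model; the map, the noise values, and all helper names are hypothetical, and the simulated robot is kept stationary for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
LANDMARKS = np.array([[3.0, 4.0], [-2.0, 6.0]])    # hypothetical map
SIGMA_R = 0.3                                       # range noise std. dev.

def motion(pose, u, dt=0.1):
    """Noisy velocity motion model (same form as the earlier sketch)."""
    v = u[0] + rng.normal(0, 0.05)
    w = u[1] + rng.normal(0, 0.05)
    return pose + np.array([v * dt * np.cos(pose[2]),
                            v * dt * np.sin(pose[2]),
                            w * dt])

def likelihood(z, pose):
    """Range-only measurement model: one range reading per landmark."""
    ranges = np.linalg.norm(LANDMARKS - pose[:2], axis=1)
    return np.prod(np.exp(-0.5 * ((z - ranges) / SIGMA_R) ** 2))

def mcl_step(particles, u, z):
    """One MCL iteration: lines 2-6 predict and weight, lines 7-10 resample."""
    predicted = np.array([motion(p, u) for p in particles])
    w = np.array([likelihood(z, p) for p in predicted])
    w /= w.sum()
    idx = rng.choice(len(predicted), size=len(predicted), p=w)  # resampling
    return predicted[idx]

# M particles drawn uniformly over the pose space (global localization)
particles = rng.uniform([-5, -5, -np.pi], [5, 5, np.pi], size=(500, 3))
z = np.linalg.norm(LANDMARKS - np.array([0.1, 0.0]), axis=1)    # simulated ranges
for _ in range(10):
    particles = mcl_step(particles, u=(0.0, 0.0), z=z)
print("mean pose estimate:", particles.mean(axis=0).round(2))
```

Note that with range-only measurements the heading remains unobservable, so the particle cloud converges in position while the orientation stays spread out.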

Monte Carlo Localization within a sensor infrastructure
Fixed sensors are deployed in known positions in the environment. The slide sequence steps through repeated MCL cycles (STEP 1, STEP 2, STEP 3), each consisting of: acquire measurements, weights update, resampling (figures).

Example for landmark-based localization (figure).

Properties of MCL
MCL can approximate any distribution: it can represent complex multi-modal distributions and blend them with Gaussian-style distributions. Increasing the total number of particles increases the accuracy of the approximation. The number of particles M is a trade-off parameter between accuracy and the necessary computational resources. The particle number must remain large enough to avoid filter divergence; it may remain fixed or change adaptively.

Adapting the particle size
In the example below, the number of particles is very high (~100,000) to allow an accurate representation of the belief during the early stages of the algorithm, but it is unnecessarily high in later stages, when the belief concentrates in smaller regions: an adaptive strategy is required (figure).

KLD sampling
Kullback-Leibler divergence (KLD) sampling is a variant of MCL that adapts the number of particles over time. The KLD (also known as information divergence, information gain, or relative entropy) is a measure of the difference between two probability distributions. For probability distributions P and Q of a discrete random variable, the KLD is defined as

$D_{KL}(P \parallel Q) = \sum_x P(x) \log \dfrac{P(x)}{Q(x)}$

For distributions P and Q of a continuous random variable, the KLD is defined as an integral:

$D_{KL}(P \parallel Q) = \int p(x) \log \dfrac{p(x)}{q(x)}\, dx$

KLD sampling
The idea is to set the number of particles adaptively, based on a statistical bound on the quality of the sample-based approximation. At each iteration of the particle filter, KLD sampling determines the number of samples such that, with probability $1 - \delta$, the error between the true posterior and the sample-based approximation is less than $\varepsilon$. To derive this bound, we assume that the true posterior is given by a discrete, piecewise constant distribution, such as a discrete density tree or a multi-dimensional histogram.

KLD sampling (figure).

Approximation formula
With k the number of histogram bins with non-zero probability, the required number of particles is

$n = \dfrac{k-1}{2\varepsilon}\left\{1 - \dfrac{2}{9(k-1)} + \sqrt{\dfrac{2}{9(k-1)}}\; z_{1-\delta}\right\}^3$

where $z_{1-\delta}$ is the upper $1-\delta$ quantile of the standard normal distribution N(0, 1).
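A direct transcription of the bound into code (a minimal sketch; scipy's standard normal quantile is used for $z_{1-\delta}$, and the default $\varepsilon$, $\delta$ values are illustrative):

```python
from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Number of particles so that, with prob. 1 - delta, the KLD between
    the sample-based approximation and the true (k-bin) posterior is < epsilon."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)              # upper 1-delta quantile of N(0,1)
    a = 2.0 / (9.0 * (k - 1))
    return int((k - 1) / (2.0 * epsilon) * (1.0 - a + (a ** 0.5) * z) ** 3)

# More occupied bins (a more spread-out belief) demand more particles:
for k in (2, 10, 100, 1000):
    print(k, kld_sample_size(k))
```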

Dynamic environments
Markov assumptions are good for a static environment, but often the environment where robots operate is full of people moving randomly, and people dynamics are not modeled by the state $x_t$. Probabilistic approaches are robust since they incorporate sensor noise, but sensor noise must be independent at each time step, while people dynamics are highly dependent in time.

Dynamic environments: which solution?
State augmentation: include the hidden state in the state estimated by the filter. This is more general, but suffers from high computational complexity: the pose of each subject moving around the robot must be estimated, and the number of states varies in time.
Outlier rejection: pre-process sensor measurements to eliminate the measurements affected by the hidden state. This may work well when the presence of people affects the sensor readings (laser range finders and, to a lesser extent, vision sensors).

Outlier rejection
The Expectation-Maximization (EM) learning algorithm is used for outlier detection and rejection. It comes from the beam model of range finders, where the $z_{unexp}$ and $p_{unexp}$ parameters relate to unexpected objects. We introduce and compute a correspondence variable $\bar{c}_t^k$ that can take one of four values, {hit, short, max, rand}. The desired probability is

$p(\bar{c}_t^k = \text{unexp} \mid z_t, z_{1:t-1}, u_{1:t}, m) = \dfrac{\int p_{unexp}(z_t^k \mid x_t, m)\, z_{unexp}\, \overline{bel}(x_t)\, dx_t}{\sum_c \int p_c(z_t^k \mid x_t, m)\, z_c\, \overline{bel}(x_t)\, dx_t}$

This integral does not have a closed-form solution; it is approximated with a representative sample of the posterior. The measurement is rejected if the probability exceeds a given threshold.

Comparison of different ML implementations

                        EKF         MHT                   Topological grid   Metric grid        MCL
Measurements            landmarks   landmarks             landmarks          raw measurements   raw measurements
Measurement noise       Gaussian    Gaussian              any                any                any
Posterior               Gaussian    mixture of Gaussians  histogram          histogram          particles
Efficiency (memory)     ++          ++                    +                  -                  +
Efficiency (time)       ++          ++                    +                  -                  +
Ease of implementation  +           -                     +                  -                  ++
Resolution              ++          ++                    -                  +                  +
Robustness              -           +                     +                  ++                 ++
Global localization     no          no                    yes                yes                yes