Robotics. Mobile Robotics. Marc Toussaint U Stuttgart


Robotics: Mobile Robotics
State estimation, Bayes filter, odometry, particle filter, Kalman filter, SLAM, joint Bayes filter, EKF SLAM, particle SLAM, graph-based SLAM
Marc Toussaint, U Stuttgart

DARPA Grand Challenge 2005: http://www.darpa.mil/grandchallenge05/

DARPA Urban Challenge 2007: http://www.darpa.mil/grandchallenge/index.asp

Quadcopter Indoor Localization: http://www.slawomir.de/publications/grzonka09icra/grzonka09icra.pdf

STAIR: STanford Artificial Intelligence Robot: http://stair.stanford.edu/multimedia.php

Types of Robot Mobility

Each type of robot mobility corresponds to a system equation

x_{t+1} = x_t + τ f(x_t, u_t)

or, if the dynamics are stochastic,

P(x_{t+1} | u_t, x_t) = N(x_{t+1} | x_t + τ f(x_t, u_t), Σ)

We considered control, path finding, and trajectory optimization. For all of this we assumed to know the state x_t of the robot (e.g., its posture/position)!
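As a concrete sketch, the deterministic system equation can be integrated with Euler steps. The unicycle model used as f below is a hypothetical choice for illustration, not taken from the slides.

```python
import numpy as np

def f(x, u):
    """Hypothetical unicycle dynamics: x = (px, py, heading), u = (speed v, turn rate omega)."""
    v, omega = u
    return np.array([v * np.cos(x[2]), v * np.sin(x[2]), omega])

def step(x, u, tau=0.1):
    """One deterministic Euler step of the system equation x_{t+1} = x_t + tau * f(x_t, u_t)."""
    return x + tau * f(x, u)

x = np.zeros(3)
for _ in range(10):          # drive straight at 1 m/s for one second
    x = step(x, (1.0, 0.0))
```

The stochastic version would simply add noise drawn from N(0, Σ) to each step.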

Outline

PART I: A core challenge in mobile robotics is state estimation: Bayesian filtering & smoothing, particles, Kalman filters.

PART II: Another challenge is to build a map while exploring: SLAM (simultaneous localization and mapping).

PART I: State Estimation Problem

Our sensory data does not provide sufficient information to determine our location. Given the local sensor readings y_t, the current state x_t (location, position) is uncertain: which hallway? which door exactly? which heading direction?

State Estimation Problem

What is the probability of being in front of room 154, given we see what is shown in the image? What is the probability given that we were just in front of room 156? What is the probability given that we were in front of room 156 and moved 15 meters?

Recall Bayes' theorem:

P(X|Y) = P(Y|X) P(X) / P(Y)

posterior = likelihood × prior / normalization

How can we apply this to the State Estimation Problem? Using Bayes' rule:

P(location | sensor) = P(sensor | location) P(location) / P(sensor)
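A minimal numeric instance of this posterior computation, with made-up prior and likelihood values for two candidate locations:

```python
# Two candidate doors; all numbers are assumed for illustration.
prior = {"room_154": 0.5, "room_156": 0.5}            # P(location)
likelihood = {"room_154": 0.8, "room_156": 0.2}       # P(sensor | location)

unnorm = {loc: likelihood[loc] * prior[loc] for loc in prior}
Z = sum(unnorm.values())                              # P(sensor), the normalization
posterior = {loc: p / Z for loc, p in unnorm.items()} # P(location | sensor)
```

With these numbers the posterior for room 154 is 0.4 / (0.4 + 0.1) = 0.8.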

Bayes Filter

x_t = state (location) at time t
y_t = sensor readings at time t
u_{t-1} = control command (action, steering, velocity) at time t-1

Given the history y_{0:t} and u_{0:t-1}, we want to compute the probability distribution over the state at time t:

p_t(x_t) := P(x_t | y_{0:t}, u_{0:t-1})

Generally:
Filtering: P(x_t | y_{0:t})
Smoothing: P(x_t | y_{0:T})
Prediction: P(x_t | y_{0:s})

Bayes Filter

p_t(x_t) := P(x_t | y_{0:t}, u_{0:t-1})
= c_t P(y_t | x_t, y_{0:t-1}, u_{0:t-1}) P(x_t | y_{0:t-1}, u_{0:t-1})
  (using Bayes' rule P(X|Y,Z) = c P(Y|X,Z) P(X|Z), with some normalization constant c_t)
= c_t P(y_t | x_t) P(x_t | y_{0:t-1}, u_{0:t-1})
  (using the conditional independence of the observation on past observations and controls)
= c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t, x_{t-1} | y_{0:t-1}, u_{0:t-1}) dx_{t-1}
  (by definition of the marginal)
= c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t | x_{t-1}, y_{0:t-1}, u_{0:t-1}) P(x_{t-1} | y_{0:t-1}, u_{0:t-1}) dx_{t-1}
  (by definition of a conditional)
= c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t | x_{t-1}, u_{t-1}) P(x_{t-1} | y_{0:t-1}, u_{0:t-1}) dx_{t-1}
  (given x_{t-1}, x_t depends only on the control u_{t-1}: Markov property)
= c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}

A Bayes filter updates the posterior belief p_t(x_t) in each time step using:
the observation model P(y_t | x_t)
the transition model P(x_t | u_{t-1}, x_{t-1})

Bayes Filter

p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}

Here P(y_t | x_t) is the new information, and the integral is the predictive estimate p̂_t(x_t) built from the old estimate p_{t-1}(x_{t-1}).

1. We have a belief p_{t-1}(x_{t-1}) of our previous position.
2. We use the motion model to predict the current position: p̂_t(x_t) ∝ ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}
3. We integrate this with the current observation to get a better belief: p_t(x_t) ∝ P(y_t | x_t) p̂_t(x_t)
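The three steps above can be sketched as a histogram (grid) Bayes filter over a discretized 1D corridor; the motion and observation numbers below are assumptions for illustration.

```python
import numpy as np

n = 10
belief = np.full(n, 1.0 / n)          # p_{t-1}(x_{t-1}): uniform over 10 cells

def predict(belief, p_move=0.8):
    """Step 2: motion model 'move right one cell' with some slip (assumed numbers);
    the corridor is treated as circular so probability mass is conserved."""
    return p_move * np.roll(belief, 1) + (1.0 - p_move) * belief

def correct(belief, likelihood):
    """Step 3: multiply in P(y_t | x_t) and renormalize (the constant c_t)."""
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Hypothetical observation model: a door at cell 3, and we currently see a door.
likelihood = np.full(n, 0.1)
likelihood[3] = 0.9
belief = correct(predict(belief), likelihood)
```

After one update the belief peaks at the door cell; repeated updates with consistent observations sharpen it further.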

Typical transition model P(x_t | u_{t-1}, x_{t-1}) in robotics: (from "Robust Monte Carlo localization for mobile robots", Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert)

Odometry ("Dead Reckoning")

The predictive distributions p̂_t(x_t) without integrating observations (i.e., removing the P(y_t | x_t) part from the Bayes filter). (from "Robust Monte Carlo localization for mobile robots", Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert)

Again, predictive distributions p̂_t(x_t) without integrating landmark observations.

The Bayes-filtered distributions p_t(x_t), integrating landmark observations.

Bayesian Filters

How to represent the belief p_t(x_t)? As a Gaussian, or as particles.

Recall: Particle Representation of a Distribution

Weighted set of N particles {(x^i, w^i)}_{i=1..N}:

p(x) ≈ q(x) := Σ_{i=1}^N w^i δ(x, x^i)

Particle Filter := Bayesian Filtering with Particles

(Bayes Filter: p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1})

1. Start with N particles {(x^i_{t-1}, w^i_{t-1})}_{i=1..N}
2. Resample particles to get N weight-1 particles: {x̂^i_{t-1}}_{i=1..N}
3. Use the motion model to get new predictive particles {x^i_t}_{i=1..N}, each x^i_t ~ P(x_t | u_{t-1}, x̂^i_{t-1})
4. Use the observation model to assign new weights w^i_t ∝ P(y_t | x^i_t)
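Steps 1-4 can be sketched on a hypothetical 1D tracking problem; the Gaussian motion and observation noise models below are assumed choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

def pf_step(particles, weights, u, y, q=0.1, r=0.5):
    """One particle-filter iteration on a toy 1D model:
    x_t = x_{t-1} + u + N(0, q^2),  y_t = x_t + N(0, r^2)."""
    # 2. resample to N weight-1 particles
    particles = particles[rng.choice(N, size=N, p=weights)]
    # 3. predictive particles from the motion model
    particles = particles + u + q * rng.standard_normal(N)
    # 4. new weights from the observation likelihood P(y_t | x^i_t)
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)
    return particles, w / w.sum()

particles = rng.standard_normal(N)       # initial belief around 0
weights = np.full(N, 1.0 / N)
for t in range(20):                      # true state moves +1 per step
    particles, weights = pf_step(particles, weights, u=1.0, y=float(t + 1))
estimate = float(np.sum(weights * particles))
```

The weighted particle mean tracks the true state (20 after 20 steps) despite the noisy motion model.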

Particle Filter

aka Monte Carlo Localization in the mobile robotics community, or the Condensation Algorithm in the vision community.

Efficient resampling is important. Typically residual resampling: instead of sampling the particle counts directly as n̂_i ~ Multi({N w_i}), set n̂_i = ⌊N w_i⌋ + ñ_i with ñ_i ~ Multi({N w_i − ⌊N w_i⌋}).

Liu & Chen (1998): Sequential Monte Carlo Methods for Dynamic Systems.
Douc, Cappé & Moulines: Comparison of Resampling Schemes for Particle Filtering.

Example: Quadcopter Indoor Localization
http://www.slawomir.de/publications/grzonka09icra/grzonka09icra.pdf

Typical Pitfall in Particle Filtering

Predicted particles {x^i_t}_{i=1..N} may have very low observation likelihood P(y_t | x^i_t) ≈ 0 ("particles die over time").

Classical solution: generate particles also with a proposal other than the purely forward P(x_t | u_{t-1}, x_{t-1}):
Choose a proposal that depends on the new observation y_t, ideally approximating P(x_t | y_t, u_{t-1}, x_{t-1}).
Or mix particles sampled directly from P(y_t | x_t) and from P(x_t | u_{t-1}, x_{t-1}).
("Robust Monte Carlo localization for mobile robots", Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert)

Kalman filter := Bayesian Filtering with Gaussians

Bayes Filter: p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}

This can be computed analytically for linear-Gaussian observations and transitions:
P(y_t | x_t) = N(y_t | C x_t + c, W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | A(u_{t-1}) x_{t-1} + a(u_{t-1}), Q)

Definition: N(x | a, A) = |2πA|^{-1/2} exp{−(1/2) (x − a)^T A^{-1} (x − a)}
Product: N(x | a, A) N(x | b, B) = N(x | B(A+B)^{-1} a + A(A+B)^{-1} b, A(A+B)^{-1} B) · N(a | b, A+B)
"Propagation": ∫_y N(x | a + F y, A) N(y | b, B) dy = N(x | a + F b, A + F B F^T)
Transformation: N(F x + f | a, A) = |F|^{-1} N(x | F^{-1}(a − f), F^{-1} A F^{-T})
(more identities: see "Gaussian identities", http://ipvs.informatik.uni-stuttgart.de/mlr/marc/notes/gaussians.pdf)
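The product identity is easy to sanity-check numerically in the scalar case (the values of a, A, b, B below are arbitrary test numbers):

```python
import numpy as np

def N(x, mean, var):
    """Scalar Gaussian density N(x | mean, var)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

a, A, b, B = 1.0, 2.0, 3.0, 0.5
x = 0.7
lhs = N(x, a, A) * N(x, b, B)
# scalar form of the product identity: mean (B a + A b)/(A+B), variance A B/(A+B)
rhs = N(x, (B * a + A * b) / (A + B), A * B / (A + B)) * N(a, b, A + B)
```

Both sides agree to machine precision, which is exactly what makes the filter update below tractable in closed form.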

Kalman filter derivation

p_t(x_t) = N(x_t | s_t, S_t)
P(y_t | x_t) = N(y_t | C x_t + c, W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | A x_{t-1} + a, Q)

p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}
= N(y_t | C x_t + c, W) ∫_{x_{t-1}} N(x_t | A x_{t-1} + a, Q) N(x_{t-1} | s_{t-1}, S_{t-1}) dx_{t-1}
= N(y_t | C x_t + c, W) N(x_t | ŝ_t, Ŝ_t)   with ŝ_t := A s_{t-1} + a,  Ŝ_t := Q + A S_{t-1} A^T
= N(C x_t + c | y_t, W) N(x_t | ŝ_t, Ŝ_t)
= N[x_t | C^T W^{-1} (y_t − c), C^T W^{-1} C] N(x_t | ŝ_t, Ŝ_t)   (dropping terms independent of x_t)
= N(x_t | s_t, S_t)

with

S_t = (C^T W^{-1} C + Ŝ_t^{-1})^{-1} = Ŝ_t − K C Ŝ_t,   where K := Ŝ_t C^T (W + C Ŝ_t C^T)^{-1} is the Kalman gain
s_t = S_t [C^T W^{-1} (y_t − c) + Ŝ_t^{-1} ŝ_t] = ŝ_t + K (y_t − C ŝ_t − c)

The second-to-last line uses the general Woodbury identity. The last line uses S_t C^T W^{-1} = K and S_t Ŝ_t^{-1} = I − K C.
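The resulting prediction-correction update can be written directly in code. This is a sketch in the slide's notation (A and a may depend on u_{t-1}); the scalar example values are arbitrary.

```python
import numpy as np

def kalman_step(s, S, y, A, a, Q, C, c, W):
    """One Gaussian Bayes-filter step: N(s_{t-1}, S_{t-1}), y_t -> N(s_t, S_t)."""
    # prediction: s_hat = A s_{t-1} + a,  S_hat = Q + A S_{t-1} A^T
    s_hat = A @ s + a
    S_hat = Q + A @ S @ A.T
    # Kalman gain K = S_hat C^T (W + C S_hat C^T)^{-1}
    K = S_hat @ C.T @ np.linalg.inv(W + C @ S_hat @ C.T)
    # correction: s_t = s_hat + K (y - C s_hat - c),  S_t = S_hat - K C S_hat
    return s_hat + K @ (y - C @ s_hat - c), S_hat - K @ C @ S_hat

# Scalar random-walk example (all matrices 1x1, values chosen for illustration)
I = np.eye(1)
s, S = kalman_step(np.zeros(1), I, y=np.array([3.0]),
                   A=I, a=np.zeros(1), Q=I, C=I, c=np.zeros(1), W=I)
```

In the scalar example the predicted variance is 2, so the gain is 2/3 and the updated mean is 2/3 of the observation.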

Extended Kalman Filter (EKF) and Unscented Transform

Bayes Filter: p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}

This can be computed analytically for linear-Gaussian observations and transitions:
P(y_t | x_t) = N(y_t | C x_t + c, W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | A(u_{t-1}) x_{t-1} + a(u_{t-1}), Q)

If P(y_t | x_t) or P(x_t | u_{t-1}, x_{t-1}) are not linear:
P(y_t | x_t) = N(y_t | g(x_t), W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | f(x_{t-1}, u_{t-1}), Q)
we approximate f and g as locally linear (Extended Kalman Filter), or sample locally from them and re-approximate as a Gaussian (Unscented Transform).
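A minimal sketch of the local linearization the EKF relies on, here via finite differences (real implementations typically use analytic Jacobians); the range-sensor g below is a hypothetical example.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x: the local linear model an EKF
    substitutes for a nonlinear f or g around the current estimate."""
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return J

# Hypothetical range observation g(x) = ||x|| from a 2D position
g = lambda x: np.array([np.sqrt(x[0] ** 2 + x[1] ** 2)])
J = jacobian(g, np.array([3.0, 4.0]))   # analytic Jacobian here is [0.6, 0.8]
```

The EKF then runs the linear Kalman update with C replaced by this Jacobian evaluated at the predicted mean.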

Bayes smoothing

Filtering: P(x_t | y_{0:t})
Smoothing: P(x_t | y_{0:T})
Prediction: P(x_t | y_{0:s})

Bayes smoothing

Let P = y_{0:t} denote the past observations and F = y_{t+1:T} the future observations. Then

P(x_t | P, F, u_{0:T}) ∝ P(F | x_t, P, u_{0:T}) P(x_t | P, u_{0:T}) = P(F | x_t, u_{t:T}) P(x_t | P, u_{0:t-1})

with β_t(x_t) := P(F | x_t, u_{t:T}) and p_t(x_t) := P(x_t | P, u_{0:t-1}). Bayesian smoothing fuses a forward filter p_t(x_t) with a backward "filter" β_t(x_t).

Backward recursion (derivation analogous to the Bayesian filter):

β_t(x_t) := P(y_{t+1:T} | x_t, u_{t:T}) = ∫_{x_{t+1}} β_{t+1}(x_{t+1}) P(y_{t+1} | x_{t+1}) P(x_{t+1} | x_t, u_t) dx_{t+1}
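A discrete two-state sketch of this forward-backward scheme (all model numbers are assumed). Note how the smoothed marginal at t=0 can disagree with the filtered one once future observations are taken into account.

```python
import numpy as np

T = np.array([[0.9, 0.1],
              [0.1, 0.9]])          # T[i, j] = P(x_{t+1}=j | x_t=i), no controls
O = np.array([[0.8, 0.2],
              [0.2, 0.8]])          # O[i, y] = P(y | x=i)
ys = [0, 1, 1]

# forward filtering p_t(x_t)
p = np.full(2, 0.5) * O[:, ys[0]]
p /= p.sum()
forward = [p]
for y in ys[1:]:
    p = O[:, y] * (T.T @ p)
    p /= p.sum()
    forward.append(p)

# backward recursion beta_t(x_t) = sum_{x_{t+1}} beta_{t+1} P(y_{t+1}|x_{t+1}) P(x_{t+1}|x_t)
beta = np.ones(2)
backward = [beta]
for y in reversed(ys[1:]):
    beta = T @ (O[:, y] * beta)
    backward.append(beta)
backward.reverse()

# smoothed marginals P(x_t | y_{0:T}) proportional to p_t(x_t) * beta_t(x_t)
smoothed = [f * b / np.sum(f * b) for f, b in zip(forward, backward)]
```

Here the filter at t=0 favors state 0 (it has only seen y_0 = 0), while the smoother, having also seen the two later observations, favors state 1.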

PART II: Localization and Mapping

The Bayesian filter requires an observation model P(y_t | x_t). A map is something that provides the observation model: a map tells us, for each x_t, what the sensor readings y_t might look like.

Types of maps

Grid map; landmark map; laser scan map.

K. Murphy (1999): Bayesian map learning in dynamic environments.
Grisetti, Tipaldi, Stachniss, Burgard, Nardi: Fast and Accurate SLAM with Rao-Blackwellized Particle Filters (Victoria Park data set).
M. Montemerlo, S. Thrun, D. Koller & B. Wegbreit (2003): FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. IJCAI, 1151-1156.

Simultaneous Localization and Mapping Problem

Notation:
x_t = state (location) at time t
y_t = sensor readings at time t
u_{t-1} = control command (action, steering, velocity) at time t-1
m = the map

Given the history y_{0:t} and u_{0:t-1}, we want to compute the belief over the pose AND the map m at time t:

p_t(x_t, m) := P(x_t, m | y_{0:t}, u_{0:t-1})

We assume to know:
the transition model P(x_t | u_{t-1}, x_{t-1})
the observation model P(y_t | x_t, m)

SLAM: classical chicken-or-egg problem

If we knew the state trajectory x_{0:t}, we could efficiently compute the belief over the map P(m | x_{0:t}, y_{0:t}, u_{0:t-1}).
If we knew the map, we could use a Bayes filter to compute the belief over the state P(x_t | m, y_{0:t}, u_{0:t-1}).

SLAM requires tying state estimation and map building together:
1) Joint inference on x_t and m (→ Kalman SLAM)
2) Tie a state hypothesis (= particle) to a map hypothesis (→ particle SLAM)
3) Frame everything as a graph optimization problem (→ graph SLAM)

Joint Bayesian Filter over x and m

A (formally) straightforward approach is the joint Bayesian filter:

p_t(x_t, m) ∝ P(y_t | x_t, m) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}, m) dx_{t-1}

But: how to represent a belief over the high-dimensional (x_t, m)?

Map uncertainty

In the case the map m = (θ_1, .., θ_N) is a set of N landmarks, θ_j ∈ R^2, use Gaussians to represent the uncertainty of the landmark positions.

(Extended) Kalman Filter SLAM

Analogous to localization, with a Gaussian for the pose belief p_t(x_t). But now the joint belief p_t(x_t, θ_{1:N}) is a (3 + 2N)-dimensional Gaussian. Assumes the map m = (θ_1, .., θ_N) is a set of N landmarks, θ_j ∈ R^2.

Exact update equations (under the Gaussian assumption); conceptually very simple.

Drawbacks:
Scaling (the full covariance matrix is O(N^2)).
Sometimes non-robust (uni-modal, "data association problem").
Lacks the advantages of the particle filter (multiple hypotheses, more robustness to non-linearities).

SLAM with particles

Core idea: each particle carries its own map belief.

Use a conditional representation p_t(x_t, m) = p_t(x_t) p_t(m | x_t). (This notation is flaky... the below is more precise.)

As for the localization problem, use particles to represent the pose belief p_t(x_t). Note: each particle actually carries a history x^i_{0:t}, a whole trajectory!

For each particle separately, estimate the map belief p^i_t(m) conditioned on the particle history x^i_{0:t}. The conditional beliefs p^i_t(m) may be factorized over grid points or landmarks of the map.

K. Murphy (1999): Bayesian map learning in dynamic environments. http://www.cs.ubc.ca/~murphyk/papers/map_nips99.pdf

Map estimation for a given particle history

Given x_{0:t} (e.g. the trajectory of a particle), what is the posterior over the map m? A simplified Bayes filter:

p_t(m) := P(m | x_{0:t}, y_{0:t}) ∝ P(y_t | m, x_t) p_{t-1}(m)

(no transition model: we assume the map is constant)

In the case of landmarks (FastSLAM):
m = (θ_1, .., θ_N)
θ_j = position of the j-th landmark, j ∈ {1, .., N}
n_t = which landmark we observe at time t, n_t ∈ {1, .., N}

We can use a separate (Gaussian) Bayes filter for each θ_j: conditioned on x_{0:t}, each θ_j is independent of each θ_k:

P(θ_{1:N} | x_{0:t}, y_{0:t}, n_{0:t}) = ∏_j P(θ_j | x_{0:t}, y_{0:t}, n_{0:t})
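For a single landmark, such a per-landmark Gaussian filter reduces to a scalar Kalman update. The 1D relative-observation model below (y is the landmark position relative to the known pose x, with assumed noise variance r) is a hypothetical illustration, not the 2D model of FastSLAM.

```python
def landmark_update(mu, var, x, y, r=0.25):
    """Fuse observation y ~ N(theta - x, r) into the landmark belief N(theta | mu, var)."""
    K = var / (var + r)                 # scalar Kalman gain
    mu_new = mu + K * ((x + y) - mu)    # innovation: observed minus predicted theta
    var_new = (1.0 - K) * var
    return mu_new, var_new

# Prior N(0, 1); robot at x = 5 observes the landmark right at its own position.
mu, var = landmark_update(mu=0.0, var=1.0, x=5.0, y=0.0)
```

Each particle maintains N such independent little filters instead of one large joint Gaussian, which is what makes FastSLAM scale.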

Particle likelihood in SLAM

Particle likelihood for the localization problem: w^i_t = P(y_t | x^i_t) (this determines the new importance weight w^i_t).

In SLAM the map is uncertain, so each particle is weighted with the expected likelihood:

w^i_t = ∫ P(y_t | x^i_t, m) p_{t-1}(m) dm

In the case of landmarks (FastSLAM):

w^i_t = ∫ P(y_t | x^i_t, θ_{n_t}, n_t) p_{t-1}(θ_{n_t}) dθ_{n_t}

Data association problem (actually we don't know n_t): for each particle separately, choose n^i_t = argmax_{n_t} w^i_t(n_t).

Particle-based SLAM summary

We have a set of N particles {(x^i, w^i)}_{i=1..N} to represent the pose belief p_t(x_t). For each particle we have a separate map belief p^i_t(m); in the case of landmarks, this factorizes into N separate 2D Gaussians.

Iterate:
1. Resample particles to get N weight-1 particles: {x̂^i_{t-1}}_{i=1..N}
2. Use the motion model to get new predictive particles {x^i_t}_{i=1..N}
3. Update the map belief p^i_t(m) ∝ P(y_t | m, x_t) p^i_{t-1}(m) for each particle
4. Compute new importance weights w^i_t ∝ ∫ P(y_t | x^i_t, m) p_{t-1}(m) dm using the observation model and the map belief

Demo: Visual SLAM

Map building from a freely moving camera. SLAM has become a big topic in the vision community. The features are typically landmarks θ_{1:N} with SURF/SIFT descriptors. PTAM (Parallel Tracking and Mapping) parallelizes the tracking and mapping computations.

G. Klein, D. Murray: Parallel Tracking and Mapping for Small AR Workspaces. http://www.robots.ox.ac.uk/~gk/ptam/

Alternative SLAM approach: Graph-based

Represent the trajectory as a graph:
nodes = estimated poses & observations
edges = transition & step estimates based on scan matching

Loop closing: check if some nodes might coincide → new edges

Classical optimization: the whole graph defines an optimization problem: find poses that minimize the sum of edge & node errors.
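A 1D toy version of this graph optimization (the odometry and loop-closure measurements are made up): minimizing the squared edge errors spreads the accumulated odometry error around the loop.

```python
import numpy as np

# Poses p0..p3; odometry edges say "+1.0 per step", a loop-closure edge
# says p3 - p0 = 2.7. Minimize sum over edges of (p_j - p_i - z_ij)^2,
# anchoring p0 at 0 to fix the gauge freedom.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.7)]
n = 4

rows = [np.eye(n)[0]]        # anchor constraint: p0 = 0
rhs = [0.0]
for i, j, z in edges:
    row = np.zeros(n)
    row[i], row[j] = -1.0, 1.0
    rows.append(row)
    rhs.append(z)

poses, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

The 0.3 of disagreement between odometry (total 3.0) and the loop closure (2.7) is distributed over all four edges, so each odometry step shrinks to 0.925. Real graph SLAM does the same with nonlinear 2D/3D pose constraints and iterative solvers.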

Loop Closing Problem

(This doesn't explicitly exist in particle filter methods: if the particles cover the belief, then data association solves the loop closing problem.)

Graph-based SLAM

"Life-long Map Learning for Graph-based SLAM Approaches in Static Environments", Kretzschmar, Grisetti, Stachniss.

SLAM code

Graph-based and grid map methods: http://openslam.org/
Visual SLAM: e.g. http://ewokrampage.wordpress.com/