Particle Filters. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics

Motivation

For continuous spaces there are often no analytical formulas for the Bayes filter updates.

Solution 1: Histogram filters (not studied in this lecture):
- Partition the state space and keep track of the probability of each partition.
- Challenges: What are the dynamics and measurement models for the partitioned representation? Often a very fine resolution is required to get reasonable results.

Solution 2: Particle filters:
- Represent the belief by random samples.
- Can use the actual dynamics and measurement models.
- Naturally allocate computational resources where required (~ adaptive resolution).
- Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter.

Sample-based Localization (sonar)

Problem to be Solved

Given a sample-based representation
  S_t = {x_t^1, x_t^2, ..., x_t^N}
of Bel(x_t) = P(x_t | z_1, ..., z_t, u_1, ..., u_t),
find a sample-based representation
  S_{t+1} = {x_{t+1}^1, x_{t+1}^2, ..., x_{t+1}^N}
of Bel(x_{t+1}) = P(x_{t+1} | z_1, ..., z_t, z_{t+1}, u_1, ..., u_{t+1}).

Dynamics Update

Given a sample-based representation S_t = {x_t^1, x_t^2, ..., x_t^N} of
Bel(x_t) = P(x_t | z_1, ..., z_t, u_1, ..., u_t),
find a sample-based representation of P(x_{t+1} | z_1, ..., z_t, u_1, ..., u_{t+1}).

Solution: for i = 1, 2, ..., N, sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i, u_{t+1}).

Observation Update

Given a sample-based representation {x_{t+1}^1, x_{t+1}^2, ..., x_{t+1}^N} of P(x_{t+1} | z_1, ..., z_t),
find a sample-based representation of
  P(x_{t+1} | z_1, ..., z_t, z_{t+1}) = C · P(x_{t+1} | z_1, ..., z_t) · P(z_{t+1} | x_{t+1}).

Solution: for i = 1, 2, ..., N, set
  w_{t+1}^(i) = w_t^(i) · P(z_{t+1} | X_{t+1} = x_{t+1}^(i)).
The distribution is then represented by the weighted set of samples
  {<x_{t+1}^1, w_{t+1}^1>, <x_{t+1}^2, w_{t+1}^2>, ..., <x_{t+1}^N, w_{t+1}^N>}.

Sequential Importance Sampling (SIS) Particle Filter

- Sample x_1^1, x_1^2, ..., x_1^N from P(X_1).
- Set w_1^i = 1 for all i = 1, ..., N.
- For t = 1, 2, ...:
  - Dynamics update: for i = 1, ..., N, sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i, u_{t+1}).
  - Observation update: for i = 1, ..., N, set w_{t+1}^i = w_t^i · P(z_{t+1} | X_{t+1} = x_{t+1}^i).

At any time t, the distribution is represented by the weighted set of samples {<x_t^i, w_t^i> : i = 1, ..., N}.
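The SIS loop can be sketched in a few lines of Python. The 1-D linear model, noise levels, and the control/measurement sequence below are illustrative assumptions, not from the slides; weights are normalized at each step only for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def sis_step(particles, weights, u, z, q_std=0.5, r_std=1.0):
    """One SIS update for an assumed 1-D toy model:
    dynamics  x_{t+1} = x_t + u + N(0, q_std^2),
    sensing   z_{t+1} = x_{t+1} + N(0, r_std^2)."""
    # Dynamics update: sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i, u_{t+1}).
    particles = particles + u + rng.normal(0.0, q_std, size=particles.shape)
    # Observation update: w_{t+1}^i = w_t^i * P(z_{t+1} | X_{t+1} = x_{t+1}^i).
    weights = weights * np.exp(-0.5 * ((z - particles) / r_std) ** 2)
    return particles, weights / weights.sum()  # normalized for readability

N = 1000
particles = rng.normal(0.0, 2.0, size=N)  # samples from P(X_1)
weights = np.full(N, 1.0 / N)
for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 3.0)]:  # illustrative data
    particles, weights = sis_step(particles, weights, u, z)
estimate = float(np.sum(weights * particles))  # weighted-mean state estimate
```

Note that only the weights react to the measurements; the sampled states themselves never do, which is exactly the issue discussed next.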

SIS Particle Filter: Major Issue

The resulting samples are only weighted by the evidence; the samples themselves are never affected by the evidence. The filter therefore fails to concentrate particles/computation in the high-probability areas of the distribution P(x_t | z_1, ..., z_t).

Sequential Importance Resampling (SIR)

At any time t, the distribution is represented by the weighted set of samples {<x_t^i, w_t^i> : i = 1, ..., N}. Sample N times from this set of particles, where the probability of drawing each particle is given by its importance weight. This focuses more particles/computation on the parts of the state space with high probability mass.

Sequential Importance Resampling (SIR) Particle Filter

1.  Algorithm particle_filter(S_{t-1}, u_t, z_t):
2.    S_t = ∅, η = 0
3.    For i = 1 ... n                                (generate new samples)
4.      Sample index j(i) from the discrete distribution given by the weights w_{t-1}
5.      Sample x_t^i from p(x_t | x_{t-1}^{j(i)}, u_t)
6.      w_t^i = p(z_t | x_t^i)                       (compute importance weight)
7.      η = η + w_t^i                                (update normalization factor)
8.      S_t = S_t ∪ {<x_t^i, w_t^i>}                 (insert)
9.    For i = 1 ... n
10.     w_t^i = w_t^i / η                            (normalize weights)
11.   Return S_t
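One SIR step (resample by weight, propagate, reweight) can be sketched as below. The 1-D toy motion and measurement models and their noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, u, z, q_std=0.5, r_std=1.0):
    """One SIR step for an assumed 1-D toy model:
    motion x' = x + u + N(0, q_std^2), sensing z = x' + N(0, r_std^2)."""
    n = len(particles)
    # Draw index j(i) from the discrete distribution given by the weights.
    idx = rng.choice(n, size=n, p=weights)
    # Sample x_t^i from p(x_t | x_{t-1}^{j(i)}, u_t).
    new_particles = particles[idx] + u + rng.normal(0.0, q_std, size=n)
    # Importance weight w_t^i = p(z_t | x_t^i), then normalize.
    new_weights = np.exp(-0.5 * ((z - new_particles) / r_std) ** 2)
    return new_particles, new_weights / new_weights.sum()

N = 500
particles = rng.normal(0.0, 2.0, size=N)
weights = np.full(N, 1.0 / N)
particles, weights = particle_filter_step(particles, weights, u=1.0, z=1.0)
```

Unlike SIS, the resampling step duplicates high-weight particles and drops low-weight ones, concentrating computation where the posterior has mass.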

Particle Filters

Sensor Information: Importance Sampling

Robot Motion

Sensor Information: Importance Sampling

Robot Motion


Noise Dominated by Motion Model [Grisetti, Stachniss, Burgard, T-RO 2006]

When the motion model is much noisier than the measurement model, most particles get (near-)zero weights and are lost.

Importance Sampling

Theoretical justification: for any function f we have
  E_p[f(x)] = ∫ f(x) p(x) dx = ∫ f(x) (p(x)/π(x)) π(x) dx = E_π[f(x) p(x)/π(x)].
f could be: whether a grid cell is occupied or not, whether the position of a robot is within 5 cm of some (x, y), etc.

Importance Sampling

Task: sample from density p(·).
Solution: sample from a proposal density π(·), and weight each sample x^(i) by p(x^(i)) / π(x^(i)).
Requirement: if π(x) = 0 then p(x) = 0.
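As a small numerical sketch of this scheme, the following estimates E_p[x²] = 1 for a standard normal target p by sampling from a wider Gaussian proposal π and weighting by p/π. The choice of target, proposal, and test function is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target density p = N(0, 1); proposal pi = N(0, 2^2). We estimate
# E_p[f(x)] with f(x) = x^2, whose true value is 1.
def p_pdf(x):
    return np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)

def pi_pdf(x):
    return np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

x = rng.normal(0.0, 2.0, size=200_000)            # samples from the proposal pi
w = p_pdf(x) / pi_pdf(x)                          # weight p(x^(i)) / pi(x^(i))
estimate = float(np.sum(w * x ** 2) / np.sum(w))  # self-normalized estimate
```

The wider proposal satisfies the support requirement: wherever p is nonzero, π is nonzero too.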

Particle Filters Revisited

1.  Algorithm particle_filter(S_{t-1}, u_t, z_t):
2.    S_t = ∅, η = 0
3.    For i = 1 ... n                                (generate new samples)
4.      Sample index j(i) from the discrete distribution given by the weights w_{t-1}
5.      Sample x_t^i from π(x_t | x_{t-1}^{j(i)}, u_t, z_t)
6.      w_t^i = p(z_t | x_t^i) p(x_t^i | x_{t-1}^{j(i)}, u_t) / π(x_t^i | x_{t-1}^{j(i)}, u_t, z_t)   (compute importance weight)
7.      η = η + w_t^i                                (update normalization factor)
8.      S_t = S_t ∪ {<x_t^i, w_t^i>}                 (insert)
9.    For i = 1 ... n
10.     w_t^i = w_t^i / η                            (normalize weights)
11.   Return S_t

Optimal Sequential Proposal π(·)

Optimal: π(x_t | x_{t-1}^i, u_t, z_t) = p(x_t | x_{t-1}^i, u_t, z_t). Then:

  w_t^i = p(z_t | x_t^i) p(x_t^i | x_{t-1}^i, u_t) / π(x_t^i | x_{t-1}^i, u_t, z_t)
        = p(z_t | x_t^i) p(x_t^i | x_{t-1}^i, u_t) / p(x_t^i | x_{t-1}^i, u_t, z_t)

Applying Bayes' rule to the denominator gives:

  p(x_t^i | x_{t-1}^i, u_t, z_t) = p(z_t | x_t^i, u_t, x_{t-1}^i) p(x_t^i | x_{t-1}^i, u_t) / p(z_t | x_{t-1}^i, u_t)

Substitution and simplification give:

  w_t^i = p(z_t | x_{t-1}^i, u_t) = ∫ p(z_t | x_t) p(x_t | x_{t-1}^i, u_t) dx_t

Optimal Sequential Proposal π(·)

Optimal: π(x_t | x_{t-1}^i, u_t, z_t) = p(x_t | x_{t-1}^i, u_t, z_t).

Challenges:
- Typically difficult to sample from p(x_t | x_{t-1}^i, u_t, z_t).
- Importance weight: typically expensive to compute the integral w_t^i = ∫ p(z_t | x_t) p(x_t | x_{t-1}^i, u_t) dx_t.

Example 1: π(·) = Optimal Proposal; Nonlinear Gaussian State Space Model

Nonlinear Gaussian state space model:
  x_t = f(x_{t-1}) + v_t,   v_t ~ N(0, Q)
  z_t = C x_t + w_t,        w_t ~ N(0, R)

Then the optimal proposal is Gaussian:
  p(x_t | x_{t-1}^i, z_t) = N(m_t^i, Σ)
with
  Σ^{-1} = Q^{-1} + Cᵀ R^{-1} C,
  m_t^i = Σ (Q^{-1} f(x_{t-1}^i) + Cᵀ R^{-1} z_t)

And the importance weight is:
  w_t^i ∝ N(z_t ; C f(x_{t-1}^i), R + C Q Cᵀ)

Example 2: π(·) = Motion Model

With π(x_t | x_{t-1}^i, u_t, z_t) = p(x_t | x_{t-1}^i, u_t), the importance weight reduces to w_t^i = p(z_t | x_t^i): this is the standard particle filter.

Example 3: Approximating the Optimal π for Localization [Grisetti, Stachniss, Burgard, T-RO 2006]

One (not so desirable) solution: use a smoothed likelihood so that more particles retain a meaningful weight, BUT information is lost.

Better: integrate the latest observation z into the proposal π.

Example 3: Approximating the Optimal π for Localization: Generating One Weighted Sample

Build a Gaussian approximation to the optimal sequential proposal:
1. Start from an initial guess of the pose.
2. Execute scan matching starting from the initial guess, resulting in a refined pose estimate.
3. Sample K points {x_k} in the region around that estimate.
4. The proposal distribution is a Gaussian with mean and covariance
     μ^i = (1/η^i) Σ_k x_k p(z_t | x_k, m) p(x_k | x_{t-1}^i, u_t)
     Σ^i = (1/η^i) Σ_k (x_k − μ^i)(x_k − μ^i)ᵀ p(z_t | x_k, m) p(x_k | x_{t-1}^i, u_t)
   where η^i = Σ_k p(z_t | x_k, m) p(x_k | x_{t-1}^i, u_t).
5. Sample from this (approximately optimal) sequential proposal distribution.
6. Weight = ∫ p(z_t | x, m) p(x | x_{t-1}^i, u_t) dx ≈ η^i.

Example 3: Example Particle Distributions [Grisetti, Stachniss, Burgard, T-RO2006] Particles generated from the approximately optimal proposal distribution. If using the standard motion model, in all three cases the particle set would have been similar to (c).

Resampling

Consider running a particle filter for a system with deterministic dynamics and no sensors.

Problem: although no information is obtained that favors one particle over another, resampling makes some particles disappear, and after running sufficiently long, with very high probability all particles will have become identical. On the surface it might look like the particle filter has uniquely determined the state.

Resampling induces loss of diversity: the variance of the particles decreases, while the variance of the particle set as an estimator of the true belief increases.

Resampling Solution I

Effective sample size (for normalized weights):
  n_eff = 1 / Σ_i (w^i)²

Examples:
- All weights = 1/N → effective sample size = N.
- All weights = 0 except for one weight = 1 → effective sample size = 1.

Idea: resample only when the effective sample size is low.
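The effective-sample-size formula is one line of code; the function name is our own, and it normalizes internally so raw weights can be passed in.

```python
import numpy as np

def effective_sample_size(weights):
    """n_eff = 1 / sum_i (w^i)^2 for normalized weights w^i."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # normalize raw weights first
    return 1.0 / np.sum(w ** 2)
```

With uniform weights this returns N, and with all mass on one particle it returns 1, matching the two extremes above.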

Resampling Solution I (ctd)

Resampling Solution II: Low-Variance Sampling

With M particles, draw a single random number r ∈ [0, 1/M] and select particles at the M equally spaced positions r, r + 1/M, ..., r + (M−1)/M of the cumulative weight distribution.

Advantages:
- More systematic coverage of the space of samples.
- If all samples have the same importance weight, no samples are lost.
- Lower computational complexity.
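A minimal sketch of low-variance (systematic) resampling; the function name and rng handling are our choices.

```python
import numpy as np

def low_variance_resample(particles, weights, rng=None):
    """Draw one r in [0, 1/M), then walk M equally spaced pointers
    through the cumulative weight distribution."""
    rng = rng if rng is not None else np.random.default_rng()
    M = len(particles)
    r = rng.uniform(0.0, 1.0 / M)
    pointers = r + np.arange(M) / M          # r, r + 1/M, ..., r + (M-1)/M
    idx = np.searchsorted(np.cumsum(weights), pointers)
    return particles[idx]                    # new weights: uniform 1/M
```

Because the pointers are equally spaced, uniform weights reproduce every particle exactly once, which is the "no samples are lost" property above.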

Resampling Solution III

Loss of diversity is caused by resampling from a discrete distribution.

Solution: regularization.
- Consider the particles to represent a continuous density.
- Sample from that continuous density.
- E.g., given (1-D) particles {x^i} with weights {w^i}, sample from the kernel density Σ_i w^i K(x − x^i) for some smoothing kernel K.
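One concrete sketch uses a Gaussian kernel: pick an index by importance weight, then perturb it with kernel noise, which is equivalent to sampling from the mixture Σ_i w^i N(x; x^i, h²). The bandwidth value below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

def regularized_resample(particles, weights, bandwidth=0.1):
    """Regularized resampling: sample from the continuous density
    sum_i w^i N(x; x^i, bandwidth^2) instead of the discrete set."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)              # pick by weight
    return particles[idx] + rng.normal(0.0, bandwidth, size=n)  # kernel noise
```

Unlike plain resampling, the returned particles are all distinct, so diversity is preserved even when one weight dominates.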

Particle Deprivation

Particle deprivation = when there are no particles in the vicinity of the correct state.

It occurs as a result of the variance in random sampling: an unlucky series of random numbers can wipe out all particles near the true state. This has non-zero probability at each time step, so it will happen eventually.

Popular solution: add a small number of randomly generated particles when resampling.
- Advantages: reduces particle deprivation; simple.
- Con: incorrect posterior estimate, even in the limit of infinitely many particles.
- Other benefit: the initialization at time 0 might not have placed anything near the true state, nor even near a state that could have evolved to be close to the true state by now; resampling culls particles that were not very consistent with past evidence anyway, and the added random samples give a new chance of getting close to the true state.

Particle Deprivation: How Many Particles to Add?

Simplest: a fixed number.

Better: monitor the probability of the sensor measurements, which can be approximated by the average of the unnormalized importance weights:
  p(z_t | z_{1:t-1}, u_{1:t}) ≈ (1/N) Σ_i w_t^i
Average this estimate over multiple time steps and compare it to typical values obtained when the state estimates are reasonable. If it is low, inject random particles.
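This monitoring idea can be sketched for a 1-D state on a bounded interval. The threshold, injected fraction, and state bounds are illustrative assumptions; a real implementation would calibrate the threshold against typical average weights seen when tracking is good.

```python
import numpy as np

rng = np.random.default_rng(5)

def maybe_inject_random(particles, raw_weights, low, high,
                        threshold=1e-3, frac=0.05):
    """If the mean unnormalized weight (an approximation of the measurement
    probability) is below the assumed threshold, replace a fraction of the
    particles with uniform random states on [low, high]."""
    if np.mean(raw_weights) < threshold:      # measurement probability low
        k = int(frac * len(particles))        # number of particles to replace
        particles[:k] = rng.uniform(low, high, size=k)
    return particles
```

In practice the estimate would be smoothed over several time steps before triggering injection, as the slide suggests.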

Noise-Free Sensors

Consider a measurement obtained with a noise-free sensor, e.g., a noise-free laser range finder. What is the issue? All particles would end up with weight zero, since it is very unlikely to have a particle matching the measurement exactly.

Solutions:
- Artificially inflate the amount of noise assumed for the sensor.
- Use a better proposal distribution (e.g., the optimal sequential proposal).

Adapting the Number of Particles: KLD-Sampling

E.g., typically more particles are needed at the beginning of a localization run.

Idea:
- Partition the state space.
- While sampling, keep track of the number of bins occupied.
- Stop sampling when a threshold that depends on the number of occupied bins is reached.
- If all samples fall into a small number of bins, the threshold is lower.

Here z_{1−δ} is the upper 1−δ quantile of the standard normal distribution; δ = 0.01 and ε = 0.05 work well in practice (ε bounds the KL divergence between the sample-based approximation and the true distribution, with probability 1−δ).
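The particle-count bound used by KLD-sampling (Fox, 2003) can be computed directly from the number of occupied bins k; the function name and the hard-coded quantile value are our choices.

```python
import math

def kld_sampling_bound(k, epsilon=0.05, z_quantile=2.326):
    """KLD-sampling bound: n = (k-1)/(2*eps) * (1 - a + sqrt(a)*z)^3
    with a = 2/(9(k-1)) and z = z_{1-delta}; 2.326 is approximately the
    0.99 quantile of the standard normal (delta = 0.01)."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * epsilon)
                     * (1.0 - a + math.sqrt(a) * z_quantile) ** 3)
```

The bound grows roughly linearly in k, so a concentrated belief (few occupied bins) needs far fewer particles than a spread-out one.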

KLD-sampling
