TSRT14: Sensor Fusion Lecture 8

TSRT14: Sensor Fusion, Lecture 8 — Particle filter theory, Marginalized particle filter
Gustaf Hendeby, gustaf.hendeby@liu.se

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 1/25 — Lecture 8: particle filter theory, marginalized particle filter

Whiteboard:
- PF tuning and properties

Slides:
- Proposal densities and the SIS PF
- Marginalized PF (MPF)

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 2/25 — Lecture 7: summary

Basic SIR PF algorithm. Choose the number of particles N.

Initialization: generate particles x_0^{(i)} ~ p_{x_0}, i = 1, ..., N.

Iterate for k = 1, 2, ...:
1. Measurement update: for i = 1, ..., N, w_{k|k}^{(i)} = w_{k|k-1}^{(i)} p(y_k | x_k^{(i)}).
2. Normalize: w_{k|k}^{(i)} := w_{k|k}^{(i)} / Σ_j w_{k|k}^{(j)}.
3. Estimation: MMSE x̂_{k|k} ≈ Σ_{i=1}^N w_{k|k}^{(i)} x_k^{(i)}, or MAP.
4. Resampling (Bayesian bootstrap): take N samples with replacement from the set {x_k^{(i)}}_{i=1}^N, where the probability of taking sample i is w_{k|k}^{(i)}. Let w_{k|k}^{(i)} = 1/N.
5. Prediction: generate random process noise samples v_k^{(i)} ~ p_{v_k}, set x_{k+1}^{(i)} = f(x_k^{(i)}, v_k^{(i)}) and w_{k+1|k}^{(i)} = w_{k|k}^{(i)}.
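
As a concrete reference, below is a minimal NumPy sketch of the SIR recursion above for a generic model x_{k+1} = f(x_k, v_k), y_k = h(x_k) + e_k. The interface names, noise levels, and the toy random-walk example at the end are illustrative placeholders, not part of the course material.

```python
import numpy as np

def sir_pf(y, f, h, sample_x0, sample_v, lik, N=1000, rng=None):
    """Basic SIR ("bootstrap") particle filter, following the summary above.

    y         : (T, ny) array of measurements y_1, ..., y_T
    f         : f(x, v) -> next state; vectorized over the N particles
    h         : h(x) -> predicted measurement per particle
    sample_x0 : sample_x0(N) -> (N, nx) initial particles from p_{x_0}
    sample_v  : sample_v(N) -> (N, nv) process-noise samples from p_{v_k}
    lik       : lik(y_k, h(x)) -> likelihood p(y_k | x^{(i)}) per particle
    """
    rng = np.random.default_rng() if rng is None else rng
    x = sample_x0(N)                           # particles x_k^{(i)}
    xhat = np.zeros((len(y), x.shape[1]))
    for k, yk in enumerate(y):
        # 1-2. measurement update and normalization
        w = lik(yk, h(x))
        w = w / np.sum(w)
        # 3. MMSE estimate
        xhat[k] = w @ x
        # 4. resampling (Bayesian bootstrap), then equal weights 1/N
        idx = rng.choice(N, size=N, replace=True, p=w)
        x = x[idx]
        # 5. prediction
        x = f(x, sample_v(N))
    return xhat

# Illustration: 1D random walk observed in Gaussian noise (all values made up)
T, sig_v, sig_e = 50, 1.0, 0.5
rng = np.random.default_rng(0)
truth = np.cumsum(sig_v * rng.standard_normal(T))
y = (truth + sig_e * rng.standard_normal(T)).reshape(-1, 1)
xhat = sir_pf(
    y,
    f=lambda x, v: x + v,
    h=lambda x: x,
    sample_x0=lambda N: rng.standard_normal((N, 1)),
    sample_v=lambda N: sig_v * rng.standard_normal((N, 1)),
    lik=lambda yk, yh: np.exp(-0.5 * np.sum((yk - yh) ** 2, axis=1) / sig_e**2),
)
```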

Particle Filter Theory

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 4/25 — Particle Filter Design: design choices

1. The choice of N is a complexity vs. performance trade-off. Complexity is linear in N, while the error is in theory bounded as g_k/N, where g_k is polynomial in k but independent of n_x.
2. N_eff = 1 / Σ_i (w_k^{(i)})² controls how often to resample: resample if N_eff < N_th. N_th = N gives SIR. Resampling increases the variance of the weights, and thus the variance of the estimate, but it is needed to avoid depletion.
3. The proposal density. An appropriate proposal makes the particles explore the most critical regions, without wasting effort on meaningless state regions.
4. Pretending that the process (and measurement) noise is larger than it actually is (dithering, jittering, roughening) is, as for the EKF and UKF, often a sound ad hoc way to avoid filter divergence.
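
The effective-sample-size rule in item 2 translates directly into a small helper; this is a sketch with names of my own choosing, not code from the slides.

```python
import numpy as np

def effective_sample_size(w):
    """N_eff = 1 / sum_i w_i^2 for normalized weights w (item 2 above)."""
    return 1.0 / np.sum(w ** 2)

def should_resample(w, N_th):
    """Resample only when the effective number of samples drops below N_th."""
    return effective_sample_size(w) < N_th

# N_eff = N for uniform weights, N_eff = 1 when one particle carries all weight
w_uniform = np.full(100, 1 / 100)
w_degenerate = np.r_[1.0, np.zeros(99)]
print(effective_sample_size(w_uniform), effective_sample_size(w_degenerate))  # 100.0, 1.0
```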

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 5/25 — Common Particle Filter Extensions

- The main problem with the basic SIR PF is depletion: after a while, only one or a few particles contribute to x̂.
- The effective number of samples, N_eff, is a measure of this. N_eff = N means that all particles contribute equally, and N_eff = 1 means that only one has a non-zero weight.
- The basic algorithm has too few design parameters; more degrees of freedom are obtained from:
  - Sequential importance sampling (SIS): only resample when needed, i.e., when N_eff < N_th.
  - A general proposal distribution q(x_k^{(i)} | x_{0:k-1}^{(i)}, y_{1:k}) for how to sample a new state in the time update. The prior, q(x_k^{(i)} | x_{0:k-1}^{(i)}, y_{1:k}) = p(x_k^{(i)} | x_{k-1}^{(i)}), is the standard option, but there may be better ones.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 6/25 — SIS PF Algorithm

Choose the number of particles N, a proposal density q(x_k^{(i)} | x_{0:k-1}^{(i)}, y_{1:k}), and a threshold N_th (for instance N_th = 2N/3).

Initialization: generate x_0^{(i)} ~ p_{x_0} and set w_{1|0}^{(i)} = 1/N, i = 1, ..., N.

Iterate for k = 1, 2, ...:
1. Measurement update: for i = 1, ..., N, w_{k|k}^{(i)} ∝ w_{k|k-1}^{(i)} p(y_k | x_k^{(i)}); normalize the weights.
2. Estimation: MMSE x̂_{k|k} ≈ Σ_{i=1}^N w_{k|k}^{(i)} x_k^{(i)}.
3. Resampling: resample with replacement when N_eff = 1 / Σ_i (w_{k|k}^{(i)})² < N_th.
4. Prediction: generate samples x_{k+1}^{(i)} ~ q(x_{k+1} | x_k^{(i)}, y_{k+1}), update the weights
   w_{k+1|k}^{(i)} ∝ w_{k|k}^{(i)} p(x_{k+1}^{(i)} | x_k^{(i)}) / q(x_{k+1}^{(i)} | x_k^{(i)}, y_{k+1}),
   and normalize.
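
A sketch of the two weight-updating steps with a general proposal, under the assumption that the proposal sampler and the densities p and q are supplied by the user as vectorized functions; all names here are illustrative, not from the slides.

```python
import numpy as np

def sis_predict(x, w, y_next, sample_q, pdf_p, pdf_q):
    """SIS prediction: sample from the proposal and correct the weights.

    x        : (N, nx) particles x_k^{(i)};  w : (N,) normalized weights w_{k|k}
    y_next   : the next measurement y_{k+1} the proposal may peek at
    sample_q : sample_q(x, y_next) -> (N, nx) samples from q(x_{k+1} | x_k, y_{k+1})
    pdf_p    : pdf_p(x_next, x) -> p(x_{k+1} | x_k) per particle
    pdf_q    : pdf_q(x_next, x, y_next) -> q(x_{k+1} | x_k, y_{k+1}) per particle
    """
    x_next = sample_q(x, y_next)
    w_next = w * pdf_p(x_next, x) / pdf_q(x_next, x, y_next)
    return x_next, w_next / np.sum(w_next)

def sis_measurement_update(w, y, x, lik):
    """SIS measurement update: w_{k|k} ∝ w_{k|k-1} p(y_k | x_k^{(i)})."""
    w = w * lik(y, x)
    return w / np.sum(w)
```

Combined with the should_resample test from the earlier snippet, these two functions give the loop above: measurement update, estimate, resample only if N_eff < N_th, then predict from the proposal.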

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 7/25 — Sampling from Proposal Density

The SIR PF samples from the prior, x_{k+1}^{(i)} ~ p(x_{k+1} | x_k^{(i)}). In general, one can sample from any proposal density, x_{k+1}^{(i)} ~ q(x_{k+1} | x_k^{(i)}, y_{k+1}). Note that we are allowed to cheat and look at the next measurement y_{k+1} when we sample.

Note that the time update can be written
p(x_{k+1} | y_{1:k}) = ∫_{R^{n_x}} [ p(x_{k+1} | x_k) / q(x_{k+1} | x_k, y_{k+1}) ] q(x_{k+1} | x_k, y_{k+1}) p(x_k | y_{1:k}) dx_k.

The new approximation becomes
p̂(x_{1:k+1} | y_{1:k}) = Σ_{i=1}^N [ p(x_{k+1}^{(i)} | x_k^{(i)}) / q(x_{k+1}^{(i)} | x_k^{(i)}, y_{k+1}) ] w_{k|k}^{(i)} δ(x_{1:k+1} − x_{1:k+1}^{(i)}),
where the bracketed factor times w_{k|k}^{(i)} is the new weight w_{k+1|k}^{(i)}.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 8/25 — Choice of Proposal Density

1. Factorized form: q(x_{0:k} | y_{1:k}) = q(x_k | x_{0:k-1}, y_{1:k}) q(x_{0:k-1} | y_{1:k}). In the original form, we sample trajectories.
2. Approximate filter form: q(x_{0:k} | y_{1:k}) ≈ q(x_k | x_{0:k-1}, y_{1:k}). In the approximate form, we keep the previous trajectory and just append x_k.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 9/25 — Proposals: (1) optimal form

q(x_k | x_{k-1}^{(i)}, y_k) = p(x_k | x_{k-1}^{(i)}, y_k) = p(y_k | x_k) p(x_k | x_{k-1}^{(i)}) / p(y_k | x_{k-1}^{(i)}),
w_{k|k}^{(i)} = w_{k-1|k-1}^{(i)} p(y_k | x_{k-1}^{(i)}).

Optimal, since the sampling of x_k does not influence (that is, increase the variance of) the weights. Drawbacks:
- It is generally hard to sample from this proposal density.
- It is generally hard to compute the weight update needed for this proposal, since it requires an integral over the whole state space,
  p(y_k | x_{k-1}) = ∫ p(y_k | x_k) p(x_k | x_{k-1}) dx_k.

For a linear (linearized) Gaussian likelihood and additive Gaussian process noise, the integral can be solved, leading to an (extended) KF time update.
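
For the solvable special case mentioned last on the slide, here is a sketch of one optimal-proposal prediction step, assuming a linear Gaussian measurement y_k = H x_k + e_k, e_k ~ N(0, R), and additive Gaussian process noise x_k = f(x_{k-1}) + v_k, v_k ~ N(0, Q). The function name and interface are mine; constant factors in the weights are dropped since the weights are normalized anyway.

```python
import numpy as np

def optimal_proposal_step(x_prev, w_prev, y, f, H, Q, R, rng):
    """One prediction step with the optimal proposal for
    x_k = f(x_{k-1}) + v_k, v_k ~ N(0, Q);  y_k = H x_k + e_k, e_k ~ N(0, R).

    p(x_k | x_{k-1}, y_k) = N(m, Sigma), with
        Sigma = (Q^{-1} + H^T R^{-1} H)^{-1},
        m     = Sigma (Q^{-1} f(x_{k-1}) + H^T R^{-1} y_k),
    and weight factor p(y_k | x_{k-1}) = N(y_k; H f(x_{k-1}), H Q H^T + R).
    x_prev: (N, nx) particles, w_prev: (N,) weights, y: (ny,) measurement.
    """
    N, nx = x_prev.shape
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
    Sigma = np.linalg.inv(Qi + H.T @ Ri @ H)        # same for every particle
    L = np.linalg.cholesky(Sigma)
    Si = np.linalg.inv(H @ Q @ H.T + R)             # inverse innovation covariance

    x_pred = f(x_prev)                              # (N, nx) prior means f(x_{k-1}^{(i)})
    b = Qi @ x_pred.T + (H.T @ Ri @ y)[:, None]     # (nx, N)
    m = (Sigma @ b).T                               # (N, nx) proposal means
    x_new = m + rng.standard_normal((N, nx)) @ L.T  # x_k^{(i)} ~ N(m_i, Sigma)

    innov = y[None, :] - x_pred @ H.T               # (N, ny)
    logw = -0.5 * np.einsum('ni,ij,nj->n', innov, Si, innov)
    w = w_prev * np.exp(logw - logw.max())          # stabilized, unnormalized
    return x_new, w / np.sum(w)
```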

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 10/25 — Proposals: (2) prior

q(x_k | x_{k-1}^{(i)}, y_k) = p(x_k | x_{k-1}^{(i)}),
w_{k|k}^{(i)} = w_{k-1|k-1}^{(i)} p(y_k | x_k^{(i)}).

The simplest and most common choice by far.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 11/25 — Proposals: (3) likelihood

q(x_k | x_{k-1}^{(i)}, y_k) = p(y_k | x_k),
w_{k|k}^{(i)} = w_{k-1|k-1}^{(i)} p(x_k^{(i)} | x_{k-1}^{(i)}).

Good in high-SNR applications, when the likelihood contains more information about x than the prior does. Drawback: the likelihood is not always invertible in x.

Marginalized Particle Filter

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 13/25 — Marginalized Particle Filter (1/2)

Objective: decrease the number of particles needed for large state spaces (say n_x > 3) by utilizing partially linear Gaussian substructures.

The task of nonlinear filtering can be split into two parts:
1. Representation of the filtering probability density function.
2. Propagation of this density during the time and measurement update stages.

It is possible to mix a parametric distribution in some dimensions with a grid/particle representation in the other dimensions.

[Figure: three panels, 'True', 'Particle', and 'Mixed', each showing a density over the linear state x^l and the nonlinear state x^n.]

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 14/25 — Marginalized Particle Filter (2/2)

- Model:
  x^n_{k+1} = f^n_k(x^n_k) + F^n_k(x^n_k) x^l_k + G^n_k(x^n_k) w^n_k,
  x^l_{k+1} = f^l_k(x^n_k) + F^l_k(x^n_k) x^l_k + G^l_k(x^n_k) w^l_k,
  y_k = h_k(x^n_k) + H_k(x^n_k) x^l_k + e_k,
  where w^n_k, w^l_k, e_k and x^l_0 are all Gaussian; x^n_0 can be general.
- The basic factorization holds: conditioned on x^n_{1:k}, the model is linear and Gaussian.
- This framework covers many navigation, tracking and SLAM problem formulations! Typically, the position is the nonlinear state, while all other states are (almost) linear and are handled by the (extended) KF.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 15/25 — Marginalized Particle Filter: key factorization

Split the state vector into two parts ("linear" and "nonlinear"),
x_k = (x^n_k, x^l_k).

The key idea in the MPF is the factorization
p(x^l_k, x^n_{1:k} | y_{1:k}) = p(x^l_k | x^n_{1:k}, y_{1:k}) p(x^n_{1:k} | y_{1:k}).

The KF provides the first factor, and the PF the second one (which requires marginalization as an implicit step)!

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 16/25 — Marginalized Particle Filter: factorization

KF factor, provided by the Kalman filter:
p(x^l_k | x^n_{1:k}, y_{1:k}) = N(x̂^l_{k|k}, P^l_{k|k}).

PF factor, given by a marginalization procedure:
p(x^n_{1:k+1} | y_{1:k}) = p(x^n_{k+1} | x^n_{1:k}, y_{1:k}) p(x^n_{1:k} | y_{1:k})
= p(x^n_{1:k} | y_{1:k}) ∫ p(x^n_{k+1} | x^l_k, x^n_{1:k}, y_{1:k}) p(x^l_k | x^n_{1:k}, y_{1:k}) dx^l_k
= p(x^n_{1:k} | y_{1:k}) ∫ p(x^n_{k+1} | x^l_k, x^n_{1:k}, y_{1:k}) N(x̂^l_{k|k}, P^l_{k|k}) dx^l_k.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 17/25 — Example: marginalized particle filter

Terrain navigation in 1D, with the unknown velocity considered as a state:
x_{k+1} = x_k + u_k + (T_s²/2) v_k,
u_{k+1} = u_k + T_s v_k,
y_k = h(x_k) + e_k.

Conditioned on the trajectory x_{1:k}, the velocity is given by a linear and Gaussian model:
u_{k+1} = u_k + T_s v_k (dynamics),
x_{k+1} − x_k = u_k + (T_s²/2) v_k (measurement).

Given this trajectory, the KF time-updates the linear part, and the position model becomes
x_{k+1} = x_k + û_{k|k} + (T_s²/2) v_k, with cov(û_{k|k}) = P_{k|k},
y_k = h(x_k) + e_k.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 18/25 — Marginalized Particle Filter: principal algorithm

1. PF time update, where x^l_k is interpreted as process noise.
2. KF time update, for each particle x^{n,(i)}_{1:k}.
3. KF extra measurement update (using the nonlinear-state dynamics), for each particle x^{n,(i)}_{1:k}.
4. PF measurement update and resampling, where x^l_k is interpreted as measurement noise.
5. KF measurement update, for each particle x^{n,(i)}_{1:k}.

If there is no linear term in the measurement equation, the KF measurement update in step 5 disappears, and the Riccati equation for P becomes the same for all sequences x^n_{1:k}. That is, only one Kalman gain for all particles! A sketch of these steps for the 1D terrain-navigation example is given below.
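
Below is a minimal NumPy sketch of the principal algorithm applied to the 1D terrain-navigation example, under two simplifying assumptions of mine: the process noises driving the position and velocity equations are treated as independent (the example shares a single v_k; the resulting cross-covariance term is omitted here), and the terrain profile h is a placeholder supplied by the caller. Since the measurement has no linear term, step 5 drops out and P is shared by all particles, as noted above.

```python
import numpy as np

def mpf_terrain_1d(y, h, x0_particles, u0_mean, u0_var, Ts, Q, R, rng=None):
    """Marginalized PF sketch for the 1D terrain-navigation example.

    Nonlinear state: position x (particles). Linear state: velocity u (one KF
    mean per particle, sharing a single covariance P because the measurement
    y_k = h(x_k) + e_k has no linear term).

    Simplification (not in the slides): position and velocity process noises
    are treated as independent, omitting the cross term from the shared v_k.
    """
    rng = np.random.default_rng() if rng is None else rng
    Qx = (Ts**2 / 2) ** 2 * Q        # position process-noise variance
    Qu = Ts**2 * Q                   # velocity process-noise variance

    x = np.array(x0_particles, dtype=float)   # particles for position
    N = x.size
    u = np.full(N, u0_mean, dtype=float)      # per-particle KF means for velocity
    P = float(u0_var)                         # shared KF covariance
    xhat = np.zeros(len(y))
    uhat = np.zeros(len(y))

    for k, yk in enumerate(y):
        # 4. PF measurement update (no linear term in y, so step 5 is absent)
        w = np.exp(-0.5 * (yk - h(x)) ** 2 / R)
        w /= w.sum()
        xhat[k], uhat[k] = w @ x, w @ u

        # resampling (plain bootstrap; the KF means follow their particles)
        idx = rng.choice(N, size=N, replace=True, p=w)
        x, u = x[idx], u[idx]

        # 1. PF time update: velocity uncertainty P acts as extra process noise
        x_new = x + u + np.sqrt(P + Qx) * rng.standard_normal(N)

        # 3. KF "extra" measurement update: z = x_new - x = u_k + noise(Qx)
        #    (applied before the KF time update; equivalent information-wise)
        K = P / (P + Qx)
        u = u + K * ((x_new - x) - u)
        P = (1 - K) * P

        # 2. KF time update: u_{k+1} = u_k + T_s v_k
        P = P + Qu
        x = x_new
    return xhat, uhat
```

Note that K and P are scalars shared by all particles here, which is exactly the "only one Kalman gain for all particles" observation made on the slide.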

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 19/25 — Marginalized Particle Filter: information flow

- There are five time indices k in the right-hand-side factorization of the prior.
- Each index is stepped up separately.
- The order is important!

Prior:        p(x^l_k, x^p_{1:k} | y_{1:k}) = p(x^l_k | x^p_{1:k}, y_{1:k}) p(x^p_{1:k} | y_{1:k})
1. PF TU:     p(x^p_{1:k} | y_{1:k}) → p(x^p_{1:k+1} | y_{1:k})
2. KF TU:     p(x^l_k | x^p_{1:k}, y_{1:k}) → p(x^l_{k+1} | x^p_{1:k}, y_{1:k})
3. KF dyn MU: p(x^l_{k+1} | x^p_{1:k}, y_{1:k}) → p(x^l_{k+1} | x^p_{1:k+1}, y_{1:k})
4. PF MU:     p(x^p_{1:k+1} | y_{1:k}) → p(x^p_{1:k+1} | y_{1:k+1})
5. KF obs MU: p(x^l_{k+1} | x^p_{1:k+1}, y_{1:k}) → p(x^l_{k+1} | x^p_{1:k+1}, y_{1:k+1})
Posterior:    p(x^l_{k+1}, x^p_{1:k+1} | y_{1:k+1}) = p(x^l_{k+1} | x^p_{1:k+1}, y_{1:k+1}) p(x^p_{1:k+1} | y_{1:k+1})

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 20/25 — Marginalized Particle Filter: properties

The MPF, compared to the full PF, gives:
- Fewer particles needed.
- Less variance.
- Less risk of divergence.
- Less tuning of the importance density and resampling needed.

The price to be paid is that the algorithm is more complex.

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 21/25 — Variance Formula

The law of total variance says that
cov(U) = cov( E(U | V) ) + E( cov(U | V) ).

Example: let x ~ 0.5 N(−1, 1) + 0.5 N(1, 1), and take U = x and V as the mode (±1). Then E(x) = 0 and
cov(x) = ( 0.5 (1 − 0)² + 0.5 (−1 − 0)² ) + ( 0.5 · 1 + 0.5 · 1 ) = 1 + 1 = 2.
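
A quick Monte Carlo check of this decomposition for the mixture example (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
mode = rng.choice([-1.0, 1.0], size=n)            # V: the mixture mode, each w.p. 0.5
x = mode + rng.standard_normal(n)                 # U = x ~ 0.5 N(-1,1) + 0.5 N(1,1)

p          = np.array([(mode == m).mean() for m in (-1.0, 1.0)])
cond_means = np.array([x[mode == m].mean() for m in (-1.0, 1.0)])
cond_vars  = np.array([x[mode == m].var() for m in (-1.0, 1.0)])

between = np.sum(p * (cond_means - x.mean()) ** 2)   # cov(E(x | V)) ≈ 1
within  = np.sum(p * cond_vars)                      # E(cov(x | V)) ≈ 1
print(between, within, between + within, x.var())    # ≈ 1, 1, 2, 2
```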

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 22/25 — Property: variance reduction

Letting U = x^l_k and V = x^n_{1:k} gives the following decomposition of the variance of the PF:

cov(x^l_k)  [PF]
  = cov( E(x^l_k | x^n_{1:k}) ) + E( cov(x^l_k | x^n_{1:k}) )
  = cov( x̂^l_{k|k}(x^{n,(i)}_{1:k}) )  [MPF]  +  Σ_{i=1}^N w_k^{(i)} P_{k|k}(x^{n,(i)}_{1:k})  [KF],

where the bracketed labels mark which filter each term is associated with.
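
This decomposition translates directly into how the MPF reports the covariance of the linear state; a small helper sketch with argument names of my own choosing:

```python
import numpy as np

def mpf_linear_state_cov(w, kf_means, kf_covs):
    """Total covariance of the linear state from the MPF representation:
    spread of the per-particle KF means (the MPF term) plus the weighted
    average of the per-particle KF covariances (the KF term).

    w        : (N,) normalized particle weights
    kf_means : (N, nl) per-particle KF means of x^l_k
    kf_covs  : (N, nl, nl) per-particle KF covariances P_{k|k}
    """
    mean = w @ kf_means                                  # overall mean of x^l_k
    diff = kf_means - mean
    between = np.einsum('n,ni,nj->ij', w, diff, diff)    # cov(E(x^l_k | x^n_{1:k}))
    within = np.einsum('n,nij->ij', w, kf_covs)          # E(cov(x^l_k | x^n_{1:k}))
    return between + within
```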

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 23/25 — Decreasing variance or N

[Figure: 'Covariance for linear states' — the covariance P^l of the linear states plotted against the number of particles N (10² to 10⁵, log scale) for the MPF, the PF, and the KF.]

Schematic view of how the covariance of the linear part of the state vector depends on the number of particles for the PF and the MPF, respectively. The gain of the MPF comes from the Kalman filter covariance, which is estimated more efficiently without using particles.

Summary

TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 25/25 — Filter Summary

- Approximate the model by one for which an optimal algorithm exists:
  - EKF1 approximates the model with a linear one.
  - UKF and EKF2 apply higher-order approximations.
  This gives an approximate Gaussian posterior.
- Approximate the optimal nonlinear filter for the original model:
  - The point-mass filter (PMF) uses a regular grid of the state space and applies the Bayesian recursion.
  - The particle filter (PF) uses a random grid of the state space and applies the Bayesian recursion.
  This gives a sample-based numerical approximation of the posterior.

Gustaf Hendeby hendeby@isy.liu.se www.liu.se